Crunchbase Data Mashed Into Microsoft Pivot

About two weeks ago I had the good fortune to spend some time at an offsite where I met Gary Flake.  I remember reading the Wired Magazine cover piece on Gary a few years back, but didn’t have any idea who he was when I was introduced to him at the offsite.  As one of Microsoft’s Technical Fellows, he’s basically one of the 20 or so smartest engineers in the company.  Spending time with a guy like that is a treat, and this guy thinks about stuff that gets me excited.  Data and systems.

It’s a good thing Gary is so good at his job, because when he gave me the initial pitch for Pivot I thought it sounded about as interesting as a new sorting algorithm [NOTE: the downloads are restricted to token holders, so if you are interested in getting Pivot, hit me up on Twitter and I will get you one].  It wasn’t a great pitch.  Only after I saw the software in action, and lifted my jaw off the floor, did I run back over to Gary and offer to rewrite his 25-word pitch.  My motives were not altogether altruistic.  I wanted access to the software, but more importantly I wanted access to the tools to create my own data sets.

The unofficial, not-blessed-by-Microsoft way I would describe Pivot is: a client application for exploring user-created data sets along multiple criteria in a rich, visual way.  In short, it’s Pivot Tables + Crack + WPF.  The demo datasets that Gary was showing were interesting, but nothing about the data was actionable.  It was informational, but not insight generating.  My brain jumped to dumping CRM data into Pivot…or a bug database…or a customer evidence set.  Things that were actionable, traditionally hard to search, and would benefit from a visual metaphor.  Then, like a ton of bricks, it hit me.  What about Crunchbase?

Spend a few minutes wandering around Crunchbase and you realize what an incredibly rich dataset they have assembled, and yet the search and browse interface could be better.  It’s rather simplistic, and it’s not possible to dive deeper into a search to refine it.  So that was my project: use the Crunchbase API to generate a dataset for Pivot.  Sounded simple enough.  Here’s how I did it, and the result.  (Here’s a link to the CXML for those of you with the Pivot browser who want to see mine in action – WARNING: it takes about 20 seconds to load.)

The Code

I have created a CodePlex project for the CrunchBase Grabber, and welcome any additions to the project.

The first problem I had to solve was how to pull the JSON objects down and use them in C#.  I normally would have done something like this in Python with the SimpleJSON library, but I really wanted to do a soup-to-nuts C# project and walk a mile in my customers’ shoes.  It turns out that we have a pretty good object for doing just this.  In the System.Web.Script.Serialization namespace (for which you have to add a reference to the System.Web.Extensions assembly) there is a nice JavaScriptSerializer object.  This was nice to use, but the information on the web about it was a bit confusing.  It appears that it was marked obsolete at one point and then brought back in .NET 3.5 SP1.  It’s back and it works.

What I liked about the JavaScriptSerializer was that it could take an arbitrary JSON object in as a stream, and then deserialize to an object of my creation.  I only needed to include the fields that I wanted from the object, so long as the names mapped to the items in the JSON object.  That made creating a custom class for just the data I wanted much easier than enumerating all of the possible data types.

    public class CrunchBase
    {
        public string name;
        public string permaLink;
        public string homepage_url;
        public string crunchbase_url;
        public string category_code;
        public string description; // = "";
        public int? number_of_employees; // = 0;
        public string overview;
        public bool deadpool;
        public int? deadpool_year; //= "";
        public imgStructure image;
        public List<locStructure> offices;
        public string tag_list;
        public int? founded_year;
        public List<fndStructure> funding_rounds;
        public Dictionary<string, fndStructure> aggFundStructure = new Dictionary<string,fndStructure>();
        public List<string> keyword_tags;
    }

There are a couple of things I want to share that will make life a lot easier for you if you plan on using this JavaScriptSerializer.  First, know how to make a type nullable.  If you don’t know what that means, here’s the short version: for any value type (like int or bool), add a “?” after the type and you will be able to assign null to it.  Why is this important?  During the deserialization process, you are bound to hit null values in the stream.  This is especially true if you aren’t in control of the stream, as I wasn’t with Crunchbase.  That’s 4 hours of frustration from my life I just saved you.  I left my comments in the code above to show that I tried all kinds of things to solve this “assignment of null” exception, and none of them worked.  Just use the “?”.
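To make that concrete, here is a minimal sketch of the pattern.  The stub class and JSON literal below are made up for illustration; the real fields are the ones in the class above.

    using System;
    using System.Web.Script.Serialization;

    public class CompanyStub
    {
        public string name;              //reference type, so it can already hold null
        public int? number_of_employees; //the "?" lets the serializer assign null here
    }

    class NullableDemo
    {
        static void Main()
        {
            //a Crunchbase-style record where a numeric field comes back null
            string json = "{\"name\":\"Example Co\",\"number_of_employees\":null}";

            JavaScriptSerializer ser = new JavaScriptSerializer();
            CompanyStub c = ser.Deserialize<CompanyStub>(json);

            //without the "?", the null above throws during deserialization
            Console.WriteLine("{0}: employees = {1}",
                c.name,
                c.number_of_employees.HasValue ? c.number_of_employees.Value.ToString() : "unknown");
        }
    }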

Second is understanding the nested data types.  Most JSON objects will have nested data structures.  When that happens, you will need to create a new type whose name matches the name of the field coming back in the JSON.  In this example, let’s look at the image data:

    public class imgStructure
    {
        public List<List<object>> available_sizes;
        public string attribution;
    }

The available_sizes actually comes back as a set of sizes paired with a relative file location.  Because each entry mixes numbers and text, a List of type object had to be used.  That’s another 3 hours I just saved you.  Here’s the JSON that came back so you can see what I mean:

 "image":
  {"available_sizes":
    [[[150,
       41],
      "assets/images/resized/0000/2755/2755v28-max-150x150.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-250x250.png"],
     [[220,
       61],
      "assets/images/resized/0000/2755/2755v28-max-450x450.png"]],
   "attribution": null},

Getting at that data would prove difficult:

return baseURL + this.image.available_sizes[1][1].ToString();


Because I wanted the middle-sized logo and its location, I used the [1][1] to get the path string.  Had I wanted the sizes, I would have needed something like [1][0][0] or [1][0][1], except that [1][0] comes back typed as object (it’s really the nested size array), so you have to cast it before you can index into it.  Yes, it’s confusing and annoying, but if you know what you want, navigating the complex nested data type can be done.
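If you do want the sizes, here is a rough sketch of that cast.  I’m casting to the non-generic IList because that works whether the serializer hands the inner array back as an ArrayList or an object[]:

    //available_sizes[1] is the middle entry: [[220, 61], "assets/.../2755v28-max-250x250.png"]
    //element [0] of that entry is the size pair, but it arrives typed as object
    var sizePair = (System.Collections.IList)this.image.available_sizes[1][0];
    int width = Convert.ToInt32(sizePair[0]);   // 220
    int height = Convert.ToInt32(sizePair[1]);  // 61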

There were actually two JSON streams I needed to parse.  The first was the company list, which I retrieved with a CompanyGenerator class that creates the WebRequest to the API, pulls down the company list JSON, and parses it into a list of company objects.

    public class CompanyGenerator
    {
        //this is how we call out to crunchbase to get their full list of companies
        public List<cbCompanyObject> GetCompanyNames()
        {
            string jsonStream;
            JavaScriptSerializer ser = new JavaScriptSerializer();

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create("http://api.crunchbase.com/v/1/companies.js");

            jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();

            //as opposed to the single company calls, this returns a list of companies, so we have to
            //stick it into a list
            List<cbCompanyObject> jsonCompanies = ser.Deserialize<List<cbCompanyObject>>(jsonStream);

            return jsonCompanies;
        }
        
    }

Once I had that list, it was a simple matter of iterating over the list and fetching the individual JSON objects per company.

            foreach (cbCompanyObject company in companyNames)
            {
                string jsonStream;

                //with a company name parsed from JSON, create the stream of the company specific JSON
                jsonStream = cjStream.GetJsonStream(company.name);

                if (jsonStream != null)
                {
                    try
                    {
                        //with the stream, now deserialize into the Crunchbase object
                        CrunchBase jsonCrunchBase = ser.Deserialize<CrunchBase>(jsonStream);

                        //assuming that worked, we need to clean up and create some additional meta data
                        jsonCrunchBase.FixCrunchBaseURL();
                        jsonCrunchBase.AggregateFunding();
                        jsonCrunchBase.SplitTagString();


Those functions FixCrunchBaseURL(), AggregateFunding() and SplitTagString() were post-processing functions meant to get more specific data for my needs.  The AggregateFunding() function was a really good time; I’ll leave it as an exercise for the reader, should you want to enjoy the fun of walking an arbitrary number of nested funding-round objects, assigning each one to the right round type, and summing the total funding per round.
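To give you the flavor without spoiling the fun, here is a rough sketch of the aggregation.  I’m assuming fndStructure exposes a round_code string (the round type) and a nullable numeric raised_amount; swap in whatever your fndStructure actually declares:

        public void AggregateFunding()
        {
            if (funding_rounds == null)
                return;

            foreach (fndStructure round in funding_rounds)
            {
                //skip malformed rounds; the data is user generated, after all
                if (round == null || round.round_code == null)
                    continue;

                fndStructure total;
                if (aggFundStructure.TryGetValue(round.round_code, out total))
                {
                    //another event in a round type we have already seen: add to the running total
                    total.raised_amount = (total.raised_amount ?? 0) + (round.raised_amount ?? 0);
                }
                else
                {
                    //first event of this round type: seed the aggregate with its own copy
                    aggFundStructure[round.round_code] = new fndStructure
                    {
                        round_code = round.round_code,
                        raised_amount = round.raised_amount ?? 0
                    };
                }
            }
        }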

Since the data is all user-generated and there’s no guarantee it’s reliable, I had to trap the exception thrown when a company URL simply doesn’t exist:

            WebRequest wrGetURL;
            wrGetURL = WebRequest.Create(apiUrlBase + companyName + urlEnd);

            try
            {
                jsonStream = new StreamReader(wrGetURL.GetResponse().GetResponseStream()).ReadToEnd();
                return jsonStream;
            }
            catch (System.Net.WebException e)
            {
                Console.WriteLine("Company: {0} - URL bad", companyName);
            }

            //returning null signals the caller to skip this company (see the null check in the loop earlier)
            return null;

I thought it strange that the company list would return permalinks to companies that are no longer listed in Crunchbase or no longer have a JSON dataset, but as long as you trap the exception, things are fine.  Once the data came back and I had it in the object, I could selectively dump data to a text file.
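The dump itself is nothing fancy.  Here is a minimal sketch of the kind of tab-delimited output I mean; processedCompanies and GetLogoUrl() are placeholders for the list my loop built and a hypothetical helper wrapping the image indexing shown earlier, and the column set is abbreviated:

    using (StreamWriter writer = new StreamWriter("crunchbase_pivot.txt"))
    {
        //column headings first so the Excel import lines up with the Pivot collection
        writer.WriteLine("Name\tCategory\tEmployees\tFoundedYear\tLogoUrl");

        foreach (CrunchBase cb in processedCompanies)
        {
            writer.WriteLine("{0}\t{1}\t{2}\t{3}\t{4}",
                cb.name,
                cb.category_code,
                cb.number_of_employees,
                cb.founded_year,
                cb.GetLogoUrl());  //hypothetical helper returning the middle-sized logo URL
        }
    }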

So that’s a simple walk through of how my code accessed the CrunchBase API in preparation for creating my Pivot data set.  Again, I have created a CodePlex project for the CrunchBase Grabber and welcome additions.

Data Set Creation

Knowing what I knew about how the Excel add-in worked, I created my text file with well-defined delimiters and column headings.  I couldn’t sort out how to import the HTML returned in the JSON for the company overview without Excel puking on the import.  That’s a nice-to-have that I will get to at a later time.

It turns out that using the tool to create the columns is less error-prone than simply trying to insert them yourself.

By creating the columns ahead of time, I could simply copy and paste from my imported tab-delimited file into my Pivot collection.  Here’s another tip – if you have a lot of image locations that are sitting on a server offsite (as in, on the Crunchbase servers), save that copy and paste for last.  As soon as you insert the URLs into the Pivot data set XLS, the Pivot add-in will try to go fetch all of the images, which can take some time.

I processed my text file down from about 15K good entries to about 4K.  The first criterion was that the company had to have a logo.  Second, it had to have funding, a country, a founding year, and a category listed.  I had been given the heads-up that anything more than about 5K objects in a single CXML file would be problematic.
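The filtering itself boils down to a LINQ query over the parsed companies.  This is the shape of it rather than my exact code; allCompanies stands in for the list the loop above produced, and a non-empty offices list is my proxy for “has a country listed”:

    //requires using System.Linq;
    //keep only companies with a logo, funding, and the facets Pivot will need
    List<CrunchBase> keepers = allCompanies
        .Where(c => c.image != null && c.image.available_sizes != null)
        .Where(c => c.funding_rounds != null && c.funding_rounds.Count > 0)
        .Where(c => c.offices != null && c.offices.Count > 0)
        .Where(c => c.founded_year != null && c.category_code != null)
        .ToList();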

I also wanted to ensure that some of the fields were not used for filtering but did show up in the information panel.  Luckily the tool made this pretty simple.  By moving the cursor to the desired column, you can tick the checkboxes to change where data will appear and how it can be used by the user.  This is a nice touch of the Excel add-in tool.

Once the data was all in, I clicked the “Publish Collection” button and wandered off for an age or two.  It took, erm, a little bit of time, even on my jacked up laptop, to process the collection and create the CXML file.  If you have access to the Pivot app, you can point your browser at this URL to see the final result.  For those of you who don’t have access to the Pivot Browser, I have included a few screen caps to show what the resulting dataset looked like.

[Screen capture: the full Crunchbase data set rendered in Pivot]

The first shot is what the full data set renders to in the window.  That’s all 4000 companies, and the Pivot criteria are on the left.  The really cool thing about Pivot is the way you can explore a data set.  Start with the full set of companies, and pivot on the web companies.  Refine that to only companies in CA and WA.  Decide that you want companies funded between 2004 and 2006, and only those that raised between $2 million and $5 million.  You can do that, in real time, and all the data reorganizes itself.  Then you can click on a company logo and get additional information.  Here’s another example screen cap.

[Screen capture: a filtered view of the collection]

All of the filtering happens in real time, and utilizes the DeepZoom technology.  When you change your query criteria, any additional data is fetched via simple HTTP requests, and it’s all quite fast.  For those of you with the Pivot app, you can see how quickly this exploration renders once you have loaded the CXML.

For my Pivot data set, the facets I opted to let the search pivot on were: company category, number of employees, city, state, country, year funded, total funding, and keyword tags.  It makes for some good data deep dives.  I want my next iteration to have the funding companies as a pivot point as well.  It would be nice to see which investors are in bed together the most.

Put simply, I am stunned by this technology.  I have barely scratched the surface of what is possible with building data sets for Pivot.  I plan to spend quite a bit of my free time in the next few weeks playing with this and thinking about additional data sources to plug into this.  I love that we are building such cool stuff at our company, and I love how accessible it was to an inquisitive mind.  I cannot wait to see what other data sets get created.