MapLarge provides Big Data analytics and visualization to make us safer, more productive, and more efficient. The MapLarge API provides real-time geospatial analytics for over 15 billion location events per day and trillions of historical events. Analysts using our geospatial visualization capability can instantly visualize and publish data for discovery and model testing on any desktop, mobile, or tablet device.

What We Do and Who We Need

Mandatory Skills:

While we hire all kinds of people and invent roles to fit them, there are three key traits we require of every team member that really define our team.

1. Fun to Work With - Life is short and work should be fun.
2. Extremely Smart - We work on cutting-edge, hard problems and we need people who can keep up.
3. Passionate Engineers - We need people who really love programming and exhibit the energy and creativity that come from being fully engaged in what you do.

Types of Roles:

We need talented people to help add cool new features to our platform. Below are the general categories of tasks we are working on, but people often straddle multiple areas, so take them as the general "gist" of what you might do with us. We are looking for smart, hard-working, fun people, and we will invent the right role to fit them.

(1) Client Side - Interactive Visuals: We maintain a JavaScript API for interactive data visualization built on our high-performance web services. We need people to add new dynamic, animated UI components and other features to the JS API. If you are artistically or analytically inclined, we also need people to build data-driven visuals for our demo gallery. Enjoy working with Knockout, Angular, D3.js, or HTML5? Check out our galleries to see the kind of stuff you will be working on: http://maplarge.com/demos

(2) Full Stack Web Services: Full-stack developers who are comfortable working on both the client and server side to produce high-performance applications powered by scalable web services are the backbone of our team. We run C# on both .NET and Mono server side, and we also maintain a flexible client-side API with a large library of reusable components. We host and run large deployments that sometimes scale to thousands of computers for scientific, industrial, and government users, and on a typical day we process over 15 billion records from 110 million streaming data sources.

(3) Algorithm / Core Database Development: We built our own in-memory database and distributed data analytics pipeline from the ground up with funding from DARPA, and we have a team of really talented researchers working on pattern analysis, streaming analytics, spatial queries, and network graph functions that power our visual engines. We are always looking for practical people who love writing highly optimized code that straddles the line between research and software development. We don't usually publish scientific papers, but we are right out on the edge, pulling algorithms out of the latest research papers and finding really fast "good enough" algorithms that let us tease interesting patterns out of data without getting lost in "science experiment land."