Silicon Valley Code Camp : October 5th and 6th 2013
My primary focus is working closely with key enterprises and ISVs that can have a significant impact on Azure consumption.
In practice, I act primarily as a cloud architect, driving the technical agenda: understanding a partner's current technology footprint and charting the technical path forward.
White-glove support for strategically significant partners spans the technology spectrum:
- Docker containers: Includes orchestration (Swarm, Mesos, DC/OS, Kubernetes)
- Data and Analytics: Azure/Elastic Search, Big Data, Machine Learning, Predictive Analytics, Map Reduce, Spark, Storm, R
- Internet of Things: Ingestion, Raspberry Pi, On-Premises Connectivity, Pub/Sub, Queues/Messaging
- Identity and Security: Active Directory, OAuth, SSO (Cloud & On-Premises)
- Web and Mobile: Apps published in Windows Store, experience with iOS/Android. Web Services (Java/Jersey, .NET/MVC Web API) and client tooling (jQuery, Bootstrap)
- Database: PostgreSQL, MySQL, SQL Server, Oracle, MongoDB, Cassandra, DocumentDB, Redis
- Networking: Point to Site, Site to Site, Gateways, VNETS, ExpressRoute, Layer 4/7 LBs
- DevOps: Scripting provisioning and management of IaaS, PaaS (Node, Python, REST), CI/CD
My writing has appeared in MSDN Magazine, and I have a published course on O'Reilly Media.
This session is designed to give you a solid understanding of the underpinnings and principles of Hadoop, perhaps the most sought-after, high-paying skill for a developer today. The in-depth session begins by illustrating how to build your own single-node Hadoop cluster on a Linux virtual machine (CentOS) for free, so you can start learning immediately. We start with a raw Linux VM, then download and install all the needed components; you can be up and running in the cloud within 30 minutes. This session is code-focused and will show no more than a few slides.

We will learn about writing low-level map/reduce code in Java, which is really the assembly language of Hadoop. From there we will introduce more efficient approaches to analyzing big data with Hadoop, running high-level queries against crime data from San Francisco as the working example. We will create tables, import data, and group crime types, all with Hive's simple SQL-like interface. Finally, a brief look at Pig rounds out the high-level programming models, along with follow-up materials so you can get up to speed on one of the most promising and financially rewarding skills today.
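To make the programming model concrete before the session, the map/shuffle/reduce pipeline that Hadoop runs at scale can be sketched in plain Python (no Hadoop installation required). This is a minimal, hypothetical word-count example, the canonical first MapReduce program; the function names are illustrative, not Hadoop APIs:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (key, value) pair for each word in the input line.
    for word in line.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # Shuffle phase: group all values by key.
    # Hadoop performs this step for you between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reducer(key, values):
    # Reduce phase: aggregate every value emitted for a key.
    return (key, sum(values))

lines = ["hadoop is big data", "big data is big"]
pairs = (pair for line in lines for pair in mapper(line))
counts = dict(reducer(key, values) for key, values in shuffle(pairs))
print(counts)  # {'hadoop': 1, 'is': 2, 'big': 3, 'data': 2}
```

A Hive query like the crime-grouping example in the session compiles down to exactly this kind of map/shuffle/reduce plan, which is why understanding the low-level model first pays off.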