Nutanix launches and a new era for data center computing is born — No SAN or NAS required! August 16, 2011Posted by ravimhatre in 2011, Cloud Computing, data, database, datacenter, enterprise infrastructure, Infrastructure, platforms, Portfolio Company blogs, startup, startups, Storage, Uncategorized.
Tags: data center, datacenter, nas, san, storage, virtualization, vmware
The Nutanix team (ex-Google, VMware, and Aster Data alums) has been quietly working to create the world’s first high-performance appliance that enables IT to deploy a complete data center environment (compute, storage, network) from a single 2U appliance.
The platform also scales to much larger configurations with zero downtime or admin changes, and users can run a broad array of mixed workloads, from mail/print/file servers to databases to back-office applications, without having to make upfront decisions about where or how to allocate their scarce hardware resources.
For the first time, an IT administrator in a small or mid-sized company or a branch office can plug in his or her virtual data center and be up and running in a matter of minutes.
Some of the most disruptive elements of Nutanix’s technology, which enable customers to avoid the expensive SAN and NAS investments typically required for true data center computing, are aptly described on the company’s blog – http://www.nutanix.com/blog/.
Take a look. We believe this represents the beginning of the next generation in data center computing.
We continue to be very enthusiastic about the tremendous amount of opportunity in the Enterprise Infrastructure sector for 2011. In the past few years, we’ve seen significant innovation in technologies such as virtualization, flash memory and distributed databases and applications. When combined with business model shifts (cloud computing) and strong macroeconomic forces (reduced R&D budgets), a “perfect storm” is created where the IT ecosystem becomes ripe for disruption. Startups can take advantage of the changing seas and ride the subsequent waves to emerge as leaders in new categories. For this post, I’ll highlight three categories where I believe we’ll see significant enterprise adoption in 2011 – big data solutions, use cases for cloud and virtualizing the network. Startups in these categories are now at the point where ideas have become stable products and science experiments have transformed into solutions.
1. BIG DATA SOLUTIONS GROW UP
There’s been a lot of “big noise” about “Big Data” for the past couple of years, but there has been “little” clarity for the traditional Enterprise customer. Hadoop, MapReduce, Cassandra, NoSQL – all interesting ideas, but what Enterprise IT needs is solutions. Solutions come when there are products optimized to solve the challenges of specific applications. Most of the exciting, fast-growing technology companies we hear about daily (Facebook, Zynga, Twitter, Groupon, LinkedIn, Google, etc.) are incredibly efficient data-centric businesses. These companies collect, analyze and leverage massive amounts of data and use it as a fundamental competitive weapon. In terms of really working with “Big Data,” Google started it. Larry and Sergey taught the world that analyzing more data often beats a smarter algorithm. These high-profile web companies created technologies to solve problems other companies had not faced before. In this copycat world we live in, Enterprise IT is ready to follow the consumer-tech leaders. The best enterprise companies are working hard to leverage vast amounts of data in order to make better decisions and deliver better products. At Lightspeed, we invested in companies like DataStax (www.datastax.com) and MapR Technologies (www.maprtech.com) because these are startups building solutions that enable Enterprise IT to work with promising Big Data platforms like Cassandra and Hadoop. With enterprise-grade solutions now available, I expect 2011 to be a year when tinkering leaps to full-scale engagement because these new platforms will deliver a meaningful advantage to Enterprise customers.
2. CLOUD COMPUTING FINDS ITS ENTERPRISE USE CASES
The hype around “Cloud Computing” is officially everywhere. My mom, who is in her sixties (sorry Mom) and just learned to text, recently asked me about Cloud Computing. Apparently she’s seen the commercials. In Enterprise IT circles and VC offices, there’s a lot of discussion around “Public” clouds vs. “Private” clouds; Infrastructure as a Service vs. Platforms as a Service; and the pros and cons of each. It’s all valuable theoretical debate, but people need to focus on the use cases and the specific economics of a particular “cloud” or platform configuration. As of right now, not every Enterprise IT use case fits the cloud model. In fact, most don’t. But there are three in particular that definitely do — application management, network and systems management and tier 2 and 3 storage. At Lightspeed, we’ve invested in a number of companies such as AppDynamics (www.appdynamics.com) and Cirtas (www.cirtas.com) which deliver solutions that are designed from the ground up to enable enterprise class customers to leverage the fundamental advantages of “Cloud Computing” – agility, leveraged resources, and a flexible cost model. Highly dynamic, distributed applications are being developed at an accelerating rate and represent an ideal use case for cloud environments when coupled with a solution like the one offered by AppDynamics which drives resource utilization based on application level demands. Similarly, Enterprise IT storage buyers have gotten smarter about tiering data among various levels of storage media, and infrequently accessed data is a great fit for cloud storage. Cloud controllers like the one offered by Cirtas enable enterprises to have the performance, security and reliability they are used to with traditional internal solutions but leverage the economics of the cloud.
3. VIRTUALIZING THE NETWORK
To date, the story of virtualization has been primarily about servers and storage. Tremendous innovation from VMware led the way for an entirely new set of companies to emerge in the data center infrastructure ecosystem. At Lightspeed, we talk about the fundamental pillars of the data center as application and systems management, servers, storage, and networking. Given all the advancement and activity around the first three, I think it’s about time the network caught up. As Enterprise IT continues to virtualize more of the data center and adopts cloud computing models (public or private), the network fundamentals are being forced to evolve as well. Networking solutions that decouple hardware from software are better aligned with the data center of the future. Companies such as Embrane (www.embrane.com) and Nicira Networks (www.nicira.com) are tackling this challenge head on and I believe 2011 will be the year where this fundamental segment of data center infrastructure starts to see meaningful momentum.
Top 5 trends for enterprise cloud computing in 2010 January 5, 2010Posted by ravimhatre in Cloud Computing, datacenter, enterprise infrastructure, Uncategorized, virtualization.
Tags: cloud, Cloud Computing, datacenter, enterprise IT, virtualization
by Ravi Mhatre, Lightspeed Venture Partners
Lightspeed has invested across multiple enterprise infrastructure areas including database virtualization (Delphix), datacenter and cloud infrastructure (AppDynamics, Mulesoft, Optier) and storage virtualization (Fusion I/O, Pliant, Nimble).
This year we wanted to profile several important trends that we see emerging for Cloud Computing in 2010:
1. Enterprises move beyond experimentation with the cloud. Enterprises will start to deploy production cloud stacks with thousands of simultaneous VMs. They will increasingly be used as a resource for both pre-production and production workloads. CIOs and IT managers will test the benefits of creating and managing internal, elastic virtual datacenters – self-service, automated infrastructure with integrated and variable chargeback and billing capabilities, all built on commodity hardware.
2. Management software to deal with scaled cloud environments moves to the forefront. As infrastructure environments become increasingly dynamic and virtualized, the “virtual datacenter” or VDC will emerge as the new enterprise compute platform. New management platforms must be developed to apply policy and automation across thousands of transient servers, fluid underlying storage and networking resource pools, and variable workloads which often need to be dynamically migrated from one part of the VDC to another. Without new management tools, enterprises will fall short in their ability to achieve true “cloud economics” in their cloud environments.
3. Enterprise policy for dealing with public clouds starts to emerge. To counter the security and financial concerns around internal developers using public cloud providers such as Amazon on an ad hoc basis, CIOs and CFOs will start to craft their enterprises’ public cloud policies and centralize purchasing and procurement. Larger enterprises, due to security or compliance restrictions, may initially prioritize internal private cloud development to recognize the benefits of cloud computing without compromising their data.
4. Public Clouds: “It’s not just about Amazon.” Other mid- and large-sized vendors (e.g., Microsoft, IBM, Rackspace, AT&T, Verizon, and others) continue to gain share in this rapidly growing market, and third-party software matures that enables tier-2 and tier-3 service providers to get into the game of providing cloud services as a complement to traditional web and server hosting. EC2 becomes the commodity service offering as higher-end providers seek to differentiate their cloud offerings with SLA-based premium services and better management capabilities.
5. VMware has to rethink its business model. As Hyper-V, Xen and KVM continue to commoditize the hypervisor and gain enterprise market share, cloud computing starts to encroach on traditional ESX/vSphere use cases for application and server consolidation. Value continues to move up the stack into integrated management features and scale-out application support. To counter enterprise adoption of other hypervisor and cloud overlay platforms, VMware will be forced to adjust pricing and licensing models to account for scale-out cloud deployments on top of hundreds or thousands of commodity servers.
Enterprise Infrastructure Predictions for 2009 December 5, 2008Posted by John Vrionis in Cloud Computing, datacenter, enterprise infrastructure, Uncategorized.
Barry Eggers and I teamed up again this year to make a few predictions about major trends to watch out for in Enterprise Infrastructure in 2009. But before we get into what we’re seeing in our crystal balls, we thought we should grade our 2008 Enterprise Infrastructure Predictions:
1. Flash-based storage makes a move towards the datacenter: A-
While 2008 was not “the year of the enterprise flash drive” as we suggested it might be, market momentum is clearly building. EMC and Sun announced enterprise storage offerings that incorporate flash drives. IBM and Dell have publicly declared their interests. Activity among private companies, including subsystems and systems companies, continues to increase.
2. Virtualization extends to the desktop: C
The big guys decided to supplement “make” with “buy” – MSFT bought Calista (a Lightspeed company) and Kidaro, VMWR purchased Thinstall. However, the market has been slower to develop than we initially predicted, and with big IT budgets constrained in 2009, we expect this market to slip into 2010 and beyond.
3. The Battle for the Top of the Rack (TOR) heats up: B+
CSCO and VMWR have decided to play nice for the time being, but there is a line of private companies that will battle CSCO in the near future, including high-density 10G switching players like Arista (with a formidable team led by ex-CSCO Jayshree Ullal and Andy B), Woven (led by ex-3Com exec Jeff Thurmond) and the I/O virtualization guys Aprius (a Lightspeed company), 3Leaf, and Xsigo. It’s still CSCO’s market to lose, but don’t count the private guys out.
And now, for the 2009 Enterprise Infrastructure Predictions:
It will, no doubt, be a challenging year for enterprise infrastructure, as with other sectors. The enterprise focus on green 2.0 (reduced energy usage) may be temporarily replaced by a focus on green 1.0 (as in money, reduced expenses, increased revenues, and short ROI periods). Despite the challenges, we do see some innovative ideas gaining traction:
1. Internal and external enterprise class clouds building momentum:
VMware is hyping its Datacenter OS and vCloud initiatives. IBM, MSFT, Sun, and HP have all indicated their enterprise class offerings will be ready for prime time in 2009. Expect increased marketing muscle touting key features – reliability, performance, security, SLAs – as differentiators (vs Amazon, Google and each other). We expect to see leading private companies emerge that are offering innovative software which enables enterprise customers to take the leap and benefit from the economic advantages of the enterprise cloud.
2. Hybrid Storage solutions gain mindshare with enterprise customers:
Given the growing trend of using flash storage in the Enterprise datacenter, expect to see an increase in innovative “Hybrid” solutions that combine flash storage with good old fashioned rotating disk drives. In these systems, the flash storage provides the “turbo” performance for apps that require it, while the rotating disks provide large amounts of inexpensive storage capacity for less demanding apps. Taken together, these hybrid systems aim to significantly reduce the total cost of storage while increasing performance and capacity.
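The tiering idea above can be sketched in a few lines. This is a hypothetical illustration of the policy, not any vendor's implementation: the most frequently accessed ("hot") blocks go to the small, fast flash tier, and everything else lands on inexpensive rotating disk. The function name, block labels, and capacity figure are all made up for the example.

```python
def tier_blocks(access_counts, flash_capacity):
    """Assign the most-accessed blocks to flash until flash is full;
    the remainder go to the rotating-disk capacity tier."""
    # Rank blocks from hottest to coldest by access count.
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    flash = set(ranked[:flash_capacity])   # "turbo" tier for demanding apps
    disk = set(ranked[flash_capacity:])    # cheap capacity tier for the rest
    return flash, disk

# Illustrative workload: block "a" is hot, "d" is nearly cold.
counts = {"a": 90, "b": 5, "c": 40, "d": 1}
flash, disk = tier_blocks(counts, flash_capacity=2)
```

Real hybrid systems make this decision continuously and at much finer granularity, but the economics are the same: a small flash tier absorbs most of the I/O while disk supplies most of the capacity.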
3. The rise of serverless computing – two trends collide:
Hypervisors have made servers more efficient, allowing them to run multiple applications concurrently on the same system. In parallel, we have seen a monumental shift towards enterprise infrastructure apps, such as storage, security, and networking, running on standard servers (i.e., appliances) instead of proprietary hardware. Taking the two trends together, in 2009 we will see multiple appliances combined onto a common physical platform. More importantly, we will see enterprise infrastructure apps and compute apps combined onto a common server platform within the datacenter. The computing vendors will view this as a way to offer and control enterprise applications. The enterprise application providers will view it as a way to do “serverless computing”. Either way, the customer wins. Fewer physical servers means lower upfront capex and lower TCO.
4. GPU computing starts getting serious attention (again):
Nvidia continues to develop and improve the interface to its GPUs, which have hundreds of processing cores. The tantalizing possibilities for cost savings and application acceleration will drive further investigation into using GPUs for mainstream computing (despite previous hiccups from some venture-backed companies). There are significant programming model obstacles to overcome, but we prefer to view that as the opportunity. Perhaps one of the cloud providers will offer GPU clusters as a high-end service. What do you think, Amazon?
Amazon’s EBS Will Push Enterprise Apps to the Cloud August 22, 2008Posted by John Vrionis in Cloud Computing, Infrastructure.
Amazon announced Thursday that its Elastic Block Store (EBS) feature is now available to all of its EC2 Web service customers. This is a big deal. The move rounds out Amazon’s offering and provides a full storage suite that is delivered as a service.
S3 and EC2 previously combined to be Amazon’s storage and compute services, but lacked many of the critical elements needed to push enterprise-class applications to the cloud. The previous holes didn’t matter much for non-mission-critical applications or batch jobs, but they were prohibitive for enterprise customers who need flexibility in the software stack and guaranteed service levels. With EBS, Amazon is sending a strong message and hosting providers should take notice – the e-commerce giant is planning to be a significant player in the fight to offer cloud services to enterprise customers.
According to Amazon CTO Werner Vogels, adding the Elastic Block Store offers customers “persistent, high-performance, high-availability block-level storage which you can attach to a running instance of EC2.” Vogels goes into further detail about the components of Amazon’s offering in his blog.
EBS volumes, which range from 1GB to 1TB, can be mounted as a file system or used as raw storage. This means customers can now deploy applications that work with any number of relational databases or file systems, not just the software stack specified by the cloud vendor (SimpleDB, in Amazon’s case). This is critical to enterprise-class customers because they are not going to lock themselves into a niche technology just to work with a certain service provider; startups have been more willing to deal with this constraint because the cloud offered such a cost-effective way to get an app up and running.
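The attach-and-mount workflow described above looks roughly like the following. This is a sketch only: it uses the modern AWS CLI syntax rather than the EC2 API tools available at the 2008 launch, and every identifier (volume ID, instance ID, device name, mount point) is a placeholder, not taken from Amazon's documentation.

```shell
# Create a 100 GB EBS volume in the same availability zone as the instance.
aws ec2 create-volume --size 100 --availability-zone us-east-1a

# Attach it to a running EC2 instance as a block device.
aws ec2 attach-volume --volume-id vol-1234abcd \
    --instance-id i-5678efgh --device /dev/sdf

# On the instance: either format and mount it as an ordinary file system...
mkfs -t ext3 /dev/sdf
mount /dev/sdf /mnt/data

# ...or hand /dev/sdf directly to a database as raw block storage.
```

Because the volume persists independently of the instance, it can later be detached and reattached elsewhere, which is what makes it suitable for databases and other stateful enterprise workloads.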
The EBS upgrade is a major move towards offering enterprise developers the flexible and reliable infrastructure they absolutely must have combined with the cheap and easy access to cloud resources they love. There are still plenty of questions about true reliability and performance that need to be answered before we see a massive migration, but this is a major step forward in providing the necessary building blocks.
Amazon was always going to be a tough competitor to beat on cost, but by adding flexibility and reliability to its story the implication for hosting providers is they will have to find another way to differentiate. My prediction is that we’ll see hosting providers attempting to move up the stack and looking to offer application management and monitoring solutions on top of their infrastructure as a way of separating themselves from Amazon. That’s good news for startups in that space like Singularity (a Lightspeed portfolio company), WeoCeo and Rightscale.