
How Nutanix is Disrupting the Market for Datacenter Infrastructure and Storage August 22, 2012

Posted by ravimhatre in 2012, datacenter, Infrastructure, virtualization.

Over the past two years we’ve seen a lot of disruption in the enterprise storage market, from the game-changing performance of flash to the next-generation storage architectures required to support cloud and virtualized data center environments.

And notwithstanding some early wins from companies like Fusion-io, we believe the underlying transformation of data center compute, storage, and networking is still in the early innings. A case in point is Nutanix, a company worth paying attention to and one where we recently led an oversubscribed growth round of financing.

So why are we so excited about Nutanix? We believe the company represents the next generation of IT infrastructure: a CONVERGED storage and compute platform uniquely able to cost-effectively power the datacenters of today and tomorrow.

Nutanix combines enterprise-class compute and storage resources into a single, inexpensive x86 system. It also incorporates elastic scale-out technologies that have historically been available only to some of the world’s largest and most technically sophisticated companies, like Google and Facebook. Now, for the first time, this revolutionary computing paradigm is being delivered to mid-range and large enterprises that are also looking to ride the disruptive economic wave afforded by cloud computing and large-scale virtualization.

The Nutanix magic is in its software, which is highly sophisticated and delivers the world’s first SDS (software-defined storage) system, just as Nicira, also a Lightspeed portfolio company, built the world’s first SDN (software-defined network) system. The combination of SDS, inexpensive compute, and a radically simplified appliance form factor that is easy to deploy and manage has customers excited and highly engaged. They are calling Nutanix Complete the world’s first “datacenter in a box.”

Beyond the technology, we have been hugely impressed by the team at Nutanix. We’ve had the privilege of working with them from the earliest days of the company, when Lightspeed originally led the Series A financing more than two and a half years ago. The founders came to us with an extremely bold vision to redefine datacenter storage and computing, and we’re incredibly excited to see how emphatically the market is now embracing this vision.

If you found this post useful, follow me at @rmtacct and follow Lightspeed at @lightspeedvp on Twitter.

Enterprise Infrastructure – What we are working on at Lightspeed in 2011 February 8, 2011

Posted by John Vrionis in 2011, Cloud Computing, database, datacenter, enterprise infrastructure, Storage, Uncategorized, virtualization.

We continue to be very enthusiastic about the tremendous amount of opportunity in the Enterprise Infrastructure sector for 2011. In the past few years, we’ve seen significant innovation in technologies such as virtualization, flash memory and distributed databases and applications. When combined with business model shifts (cloud computing) and strong macroeconomic forces (reduced R&D budgets), a “perfect storm” is created where the IT ecosystem becomes ripe for disruption. Startups can take advantage of the changing seas and ride the subsequent waves to emerge as leaders in new categories. For this post, I’ll highlight three categories where I believe we’ll see significant enterprise adoption in 2011 – big data solutions, use cases for cloud and virtualizing the network. Startups in these categories are now at the point where ideas have become stable products and science experiments have transformed into solutions.

1. BIG DATA SOLUTIONS GROW UP

There’s been a lot of “big noise” about “Big Data” for the past couple of years, but little clarity for the traditional Enterprise customer. Hadoop, MapReduce, Cassandra, NoSQL – all interesting ideas, but what Enterprise IT needs is solutions. Solutions come when products are optimized to solve the challenges of specific applications. Most of the exciting, fast-growing technology companies we hear about daily (Facebook, Zynga, Twitter, Groupon, LinkedIn, Google, etc.) are incredibly efficient data-centric businesses. These companies collect, analyze and leverage massive amounts of data and use it as a fundamental competitive weapon. In terms of really working with “Big Data,” Google started it. Larry and Sergey taught the world that analyzing more data often beats a cleverer algorithm. These high-profile web companies created technologies to solve problems other companies had not faced before. In this copycat world we live in, Enterprise IT is ready to follow the consumer-tech leaders. The best enterprise companies are working hard to leverage vast amounts of data in order to make better decisions and deliver better products. At Lightspeed, we invested in companies like DataStax (www.datastax.com) and MapR Technologies (www.maprtech.com) because these startups are building solutions that enable Enterprise IT to work with promising Big Data platforms like Cassandra and Hadoop. With enterprise-grade solutions now available, I expect 2011 to be the year when tinkering leaps to full-scale engagement, because these new platforms will deliver a meaningful advantage to Enterprise customers.
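
The programming model behind Hadoop is easy to see in miniature. The toy sketch below counts words with an explicit map phase and reduce phase in plain Python (the function names and sample data are illustrative only – a real Hadoop job distributes both phases across a cluster):

```python
from collections import defaultdict

def map_phase(docs):
    # Mapper: emit a (word, 1) pair for every word in every document.
    for doc in docs:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Reducer: sum the counts emitted for each distinct key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["big data", "big noise about big data"]
print(reduce_phase(map_phase(docs)))
# {'big': 3, 'data': 2, 'noise': 1, 'about': 1}
```

Because the mapper’s output can be partitioned by key, each phase parallelizes across machines – which is exactly what makes the model attractive at “Big Data” scale.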

2. CLOUD COMPUTING FINDS ITS ENTERPRISE USE CASES

The hype around “Cloud Computing” is officially everywhere. My mom, who is in her sixties (sorry Mom) and just learned to text, recently asked me about Cloud Computing. Apparently she’s seen the commercials. In Enterprise IT circles and VC offices, there’s a lot of discussion around “Public” clouds vs. “Private” clouds, Infrastructure as a Service vs. Platform as a Service, and the pros and cons of each. It’s all valuable theoretical debate, but people need to focus on the use cases and the specific economics of a particular “cloud” or platform configuration. As of right now, not every Enterprise IT use case fits the cloud model. In fact, most don’t. But three in particular definitely do: application management, network and systems management, and tier 2 and 3 storage. At Lightspeed, we’ve invested in a number of companies, such as AppDynamics (www.appdynamics.com) and Cirtas (www.cirtas.com), which deliver solutions designed from the ground up to let enterprise-class customers leverage the fundamental advantages of “Cloud Computing” – agility, leveraged resources, and a flexible cost model. Highly dynamic, distributed applications are being developed at an accelerating rate and represent an ideal use case for cloud environments when coupled with a solution like the one offered by AppDynamics, which drives resource utilization based on application-level demands. Similarly, Enterprise IT storage buyers have gotten smarter about tiering data among various levels of storage media, and infrequently accessed data is a great fit for cloud storage. Cloud controllers like the one offered by Cirtas give enterprises the performance, security and reliability they are used to with traditional internal solutions while leveraging the economics of the cloud.
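
To make the tiering idea concrete, here is a minimal sketch of the kind of placement policy a cloud storage controller might apply – the 90-day threshold and the function name are hypothetical, not Cirtas’s actual design:

```python
import time

# Hypothetical policy: data untouched for 90 days moves to the cloud tier.
TIER_AFTER_SECONDS = 90 * 24 * 3600

def pick_tier(last_access_ts, now=None):
    """Return 'local' for hot data, 'cloud' for cold tier 2/3 data."""
    now = time.time() if now is None else now
    age = now - last_access_ts
    return "cloud" if age > TIER_AFTER_SECONDS else "local"
```

A real controller layers caching, encryption and deduplication on top, but the economics flow from this simple decision: keep hot blocks on fast internal storage and pay cloud prices only for cold data.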

3. VIRTUALIZING THE NETWORK

To date, the story of virtualization has been primarily about servers and storage. Tremendous innovation from VMware led the way for an entirely new set of companies to emerge in the data center infrastructure ecosystem. At Lightspeed, we talk about the fundamental pillars of the data center as application and systems management, servers, storage, and networking. Given all the advancement and activity around the first three, I think it’s about time the network caught up. As Enterprise IT continues to virtualize more of the data center and adopts cloud computing models (public or private), the network fundamentals are being forced to evolve as well. Networking solutions that decouple hardware from software are better aligned with the data center of the future. Companies such as Embrane (www.embrane.com) and Nicira Networks (www.nicira.com) are tackling this challenge head on and I believe 2011 will be the year where this fundamental segment of data center infrastructure starts to see meaningful momentum.

Top 5 trends for enterprise cloud computing in 2010 January 5, 2010

Posted by ravimhatre in Cloud Computing, datacenter, enterprise infrastructure, Uncategorized, virtualization.

by Ravi Mhatre, Lightspeed Venture Partners

Lightspeed has invested across multiple enterprise infrastructure areas, including database virtualization (Delphix), datacenter and cloud infrastructure (AppDynamics, MuleSoft, OpTier) and storage virtualization (Fusion-io, Pliant, Nimble).

This year we wanted to profile several important trends that we see emerging for Cloud Computing in 2010:
1. Enterprises move beyond experimentation with the cloud. Enterprises will start to deploy production cloud stacks with thousands of simultaneous VMs, and will increasingly use them as a resource for both pre-production and production workloads. CIOs and IT managers will test the benefits of creating and managing internal, elastic virtual datacenters – self-service, automated infrastructure with integrated and variable chargeback and billing capabilities, all built on commodity hardware.
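
The chargeback piece of that vision is conceptually simple – meter each group’s consumption and bill it back at an internal rate. A minimal sketch (the rate and department names are invented for illustration):

```python
def chargeback(vm_hours_by_dept, rate_per_vm_hour):
    """Bill each department for the VM-hours it consumed."""
    return {dept: round(hours * rate_per_vm_hour, 2)
            for dept, hours in vm_hours_by_dept.items()}

usage = {"engineering": 1200, "marketing": 300}  # metered VM-hours
print(chargeback(usage, 0.05))
# {'engineering': 60.0, 'marketing': 15.0}
```

The hard part in practice is not the arithmetic but the metering: attributing transient VMs, storage and network I/O to the right internal customer.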

2. Management software for scaled cloud environments moves to the forefront. As infrastructure environments become increasingly dynamic and virtualized, the “virtual datacenter” or VDC will emerge as the new enterprise compute platform. New management platforms must be developed to apply policy and automation across thousands of transient servers, fluid underlying storage and networking resource pools, and variable workloads that often need to be dynamically migrated from one part of the VDC to another. Without new management tools, enterprises will fall short of achieving true “cloud economics” in their cloud environments.


3. Enterprise policy for dealing with public clouds starts to emerge. To counter the security and financial concerns around internal developers using public cloud providers such as Amazon on an ad hoc basis, CIOs and CFOs will start to craft their enterprises’ public cloud policies and centralize purchasing and procurement. Larger enterprises, due to security or compliance restrictions, may initially prioritize internal private cloud development to realize the benefits of cloud computing without compromising their data.

4. Public clouds: “It’s not just about Amazon.” Other mid- and large-sized vendors (e.g. Microsoft, IBM, Rackspace, AT&T, Verizon) will continue to gain share in this rapidly growing market, and third-party software will mature that enables tier-2 and tier-3 service providers to get into the game of providing cloud services as a complement to traditional web and server hosting. EC2 becomes the commodity service offering as higher-end providers seek to differentiate their cloud offerings with SLA-based premium services and better management capabilities.

5. VMware has to rethink its business model. As Hyper-V, Xen and KVM continue to commoditize the hypervisor and gain enterprise market share, cloud computing starts to encroach on traditional ESX/vSphere use cases for application and server consolidation. Value continues to move up the stack into integrated management features and scale-out application support. To counter enterprise adoption of other hypervisor and cloud-overlay platforms, VMware will be forced to adjust its pricing and licensing models to account for scale-out cloud deployments on top of hundreds or thousands of commodity servers.

2008 Enterprise Infrastructure Predictions December 5, 2007

Posted by jeremyliew in 2008, datacenter, enterprise infrastructure, flash, virtualization.

Following up on our consumer internet and mobile predictions, my partner Barry Eggers looks into the crystal ball for 2008 to draw some predictions about enterprise infrastructure:

1. Flash-based storage makes a move towards the datacenter. The last bastion of moving parts in the datacenter – rotating disk drives – will start to feel the heat (no pun intended) as flash-based storage solutions make their way into the enterprise. For the last few decades, rotating disks have dominated enterprise storage the way the legendary John Wooden’s teams dominated their collegiate foes. While solid-state drives based on DRAM have been around since Wooden was carrying a lineup card in his hand, they have been relegated to niche performance-oriented applications, the equivalent of playing backup center to Lew Alcindor. But flash memory could change all that. Flash-based storage, whose cost/GB is rapidly approaching that of magnetic disks, offers the additional benefits of 10X the performance, higher storage densities, and, last but not least to datacenter enthusiasts, significantly lower power per I/O. All of this could propel SSD 2.0 into the mainstream. Sure, rotating disk drive companies will add flash memory caches and wave their magic marketing wands, but industry insiders will tell you it’s not enough. Innovation will come from a small group of companies that are solving the limitations of flash as it applies to enterprise users. Expect to see hybrid systems, based on a combination of flash and rotating disk, coming to a datacenter near you. Anyone for Green Storage?
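
One way to picture such a hybrid system is as a small, fast flash tier acting as a cache in front of rotating disk. The sketch below uses a toy LRU policy (the capacities and class name are illustrative, not any vendor’s design):

```python
from collections import OrderedDict

class HybridStore:
    """Toy hybrid storage: a small flash LRU cache over a disk tier."""

    def __init__(self, flash_blocks=2):
        self.flash = OrderedDict()  # fast tier, limited capacity
        self.disk = {}              # slow rotating-disk tier
        self.flash_blocks = flash_blocks

    def write(self, key, value):
        self.disk[key] = value

    def read(self, key):
        if key in self.flash:                   # flash hit: fast path
            self.flash.move_to_end(key)
            return self.flash[key]
        value = self.disk[key]                  # miss: go to rotating disk
        self.flash[key] = value                 # promote the block to flash
        if len(self.flash) > self.flash_blocks:
            self.flash.popitem(last=False)      # evict least-recently-used
        return value
```

Real designs must also cope with flash’s write-endurance limits and block-erase behavior, which is where the startups solving flash’s enterprise limitations earn their keep.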

2. Virtualization extends to the desktop.
What is good for the server must be good for the desktop, right? Well, yes, but for a different reason. Server virtualization drives higher utilization on machines possessing an ever-increasing number of cores. Desktop Virtualization is not necessarily about utilization. Furthermore, desktop virtualization is not thin client 2.0. Thin Clients were about reducing up-front capital costs with a slimmed-down hardware client. Desktop Virtualization is about intelligently provisioning applications to desktop users. It’s about management, security, compliance, and reducing Total Cost of Ownership. Desktop Virtualization will also be more powerful to end users when used in conjunction with virtualized servers. The limitation with first generation virtualized desktops is that they offer a user experience much less satisfying than a full desktop, but that will begin to change in 2008…stay tuned for more details…

3. The Battle for the Top of the Rack (TOR) heats up.
As server racks are populated with more cores per CPU and more VMs per core, memory and network I/O limitations will become priority concerns. How VMs share those physical resources will impact overall system performance and significantly influence the rate at which mission critical applications are run in virtualized environments. It’s a challenge most adopters of virtualization don’t deal with yet, but many vendors are working on solutions in anticipation of the time when well-utilized CPUs shift the datacenter bottleneck to memory and I/O. Whether the answer comes from software solutions that are internal to the server (advantage software companies) or additional hardware (perhaps a TOR solution) dedicated to managing network services and optimizing physical resource sharing (advantage system players), it represents a meaningful battle for a critical position in the next generation data center.
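
The resource-sharing problem is easy to state, even if the solutions are hard. A weighted proportional split of a physical NIC among VMs might look like this (the weights and the simple proportional-share policy are a simplification of what real schedulers do):

```python
def share_bandwidth(link_gbps, vm_weights):
    """Split a NIC's bandwidth among VMs in proportion to their weights."""
    total = sum(vm_weights.values())
    return {vm: link_gbps * w / total for vm, w in vm_weights.items()}

# A 10 Gb/s link shared by one I/O-heavy VM and two lighter ones.
print(share_bandwidth(10, {"db_vm": 2, "web_vm": 1, "batch_vm": 1}))
# {'db_vm': 5.0, 'web_vm': 2.5, 'batch_vm': 2.5}
```

Whether that arbitration runs in software inside the server or in dedicated TOR hardware is precisely what the battle is about.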

Later in the week, Cleantech.