The Future of Data Storage. September 7, 2012. Posted by krishparikh in 2012, big data, enterprise 2.0, enterprise infrastructure, startups, Storage.
Tags: Data Storage, Event, Flash Memory, Nimble Storage, Nutanix, Tintri, XtremIO
In the so-called age of “big data”, enterprises will need to contend not only with the sheer volume of data they generate (ranging from hundreds to thousands of terabytes), but also with the velocity and variety of these new data streams. (1) To put these numbers in perspective, imagine each enterprise storing and analyzing, every year, a volume of data equivalent to the information catalogued by the US Library of Congress! (2)
Recognizing that this explosion of storage growth cannot be managed by legacy infrastructure, both investors and storage vendors are betting on flash memory as the technology to keep pace with the growing data challenges faced by enterprises. Incumbents EMC and IBM have recently acquired all-flash storage companies XtremIO and Texas Memory Systems to augment their legacy storage solutions. Meanwhile, startups Pure Storage and Nutanix have raised large rounds of growth financing, further evidence that investors are bullish on the flash storage trend.
We at Lightspeed were early believers in the disruptive power of flash memory in next-generation storage systems. (3) The decreasing cost of flash memory, driven by widespread adoption in consumer devices, coupled with data access and retrieval times 10-100x faster than rotating disk and a power and physical footprint roughly a tenth that of disk, positioned flash well to be the transformative storage technology in the datacenter. Our early investments in component technologies (Link-a-Media, Pliant Technology, Fusion-io), systems companies (XtremIO), and software technologies (IO Turbine) centered around flash memory have validated that hypothesis.
To better understand the role of flash memory and its impact on performance, capacity, energy usage, and cost in next-generation storage systems, I invite you to join me at the Future of Data Storage event on September 18 in San Francisco. Hosted by BTIG and moderated by Andrew Reichman, principal analyst with Forrester Research covering infrastructure and storage technologies, the event will bring together five leading companies focused on driving innovation around data storage in the enterprise:
- Nimble Storage is creating hybrid storage systems that converge primary storage, backup storage, and data protection technology in a single appliance.
- Nutanix is creating converged storage and compute appliances that allow enterprises to build Google-like, scale-out datacenters.
- Pure Storage is creating all-flash enterprise storage arrays focused on delivering high performance at cost-effective price points.
- Tintri is creating storage systems optimized for virtual machines, improving the manageability and cost-effectiveness of virtualized workloads.
- Virident is creating PCIe flash accelerator cards that allow frequently used data to sit closer to the CPU in servers.
As we look toward the future, startups will continue to innovate around flash memory, creating next-generation storage systems stitched together with intelligent software to disrupt existing markets based on disk architectures.
If you are interested in joining us at the event please email eventRSVP@lsvp.com along with your name and contact information. Webcasting will also be available.
I look forward to exploring these trends further during the Future of Data Storage event through the lens of five emerging startups – hope to see you there!
(1) McKinsey Global Institute Report “Big data: The next frontier for innovation, competition, and productivity”
(2) Library of Congress Website, January 2012 Data: As of January 2012, the Library has collected about 285 terabytes of web archive data growing at a rate of about 5 terabytes per month.
Follow us on twitter at @lightspeedvp for more information on the future of storage and events like these.
The Enterprise Flash Market is Taking Off. May 10, 2012. Posted by Barry Eggers in enterprise infrastructure, flash.
Congratulations to Lightspeed portfolio company XtremIO and founders Ehud, Shuki, and Yaron for their acquisition which was announced today. This is another reminder of just how fast flash has gone from promising consumer technology to mainstream enterprise building block – primarily fueled by the proliferation of virtualization, BI, and Big Data coupled with the rapid decline in flash pricing.
We made our first investment in an enterprise flash company (Pliant Technology) in 2007; in 2008 we blogged about the potential for this technology to disrupt the storage market, and in 2010 we predicted that it could be the next $10 billion IT market.
Venture dollars have flowed into the enterprise flash market over the last few years and, to date, there have been 7 major liquidity events for venture-backed companies: Fusion-io’s IPO (a Lightspeed portfolio company), Pliant (a Lightspeed portfolio company acquired by SanDisk), SandForce (LSI), Anobit (Apple), IO Turbine (a Lightspeed portfolio company acquired by Fusion-io), FlashSoft (SanDisk), and XtremIO (a Lightspeed portfolio company). There are also a number of promising companies emerging in this market, including Violin Memory, Nimble Storage (a Lightspeed portfolio company), Pure Storage, SolidFire, Tintri (a Lightspeed portfolio company), Avere, Kaminario, and Nutanix (a Lightspeed portfolio company), as well as many others in development phases.
Despite all of the exit activity, we are in the early innings of this market – stay tuned for more fireworks…
Nutanix launches and a new era for data center computing is born — No SAN or NAS required! August 16, 2011. Posted by ravimhatre in 2011, Cloud Computing, data, database, datacenter, enterprise infrastructure, Infrastructure, platforms, Portfolio Company blogs, startup, startups, Storage, Uncategorized.
Tags: data center, datacenter, nas, san, storage, virtualization, vmware
The Nutanix team (alumni of Google, VMware, and Aster Data) has been quietly working to create the world’s first high-performance appliance that enables IT to deploy a complete data center environment (compute, storage, network) from a single 2U appliance.
The platform also scales to much larger configurations with zero downtime or admin changes, and users can run a broad array of mixed workloads, from mail/print/file servers to databases to back-office applications, without having to make upfront decisions about where or how to allocate their scarce hardware resources.
For the first time an IT administrator in a small or mid-sized company or a branch office can plug in his or her virtual data center and be up/running in a matter of minutes.
Some of the most disruptive elements of Nutanix’s technology, which enable customers to avoid the expensive SAN and NAS investments typically required for true data center computing, are aptly described on the company’s blog – http://www.nutanix.com/blog/.
Take a look. We believe this represents the beginning of the next generation in data center computing.
We continue to be very enthusiastic about the tremendous amount of opportunity in the Enterprise Infrastructure sector for 2011. In the past few years, we’ve seen significant innovation in technologies such as virtualization, flash memory and distributed databases and applications. When combined with business model shifts (cloud computing) and strong macroeconomic forces (reduced R&D budgets), a “perfect storm” is created where the IT ecosystem becomes ripe for disruption. Startups can take advantage of the changing seas and ride the subsequent waves to emerge as leaders in new categories. For this post, I’ll highlight three categories where I believe we’ll see significant enterprise adoption in 2011 – big data solutions, use cases for cloud and virtualizing the network. Startups in these categories are now at the point where ideas have become stable products and science experiments have transformed into solutions.
1. BIG DATA SOLUTIONS GROW UP
There’s been a lot of “big noise” about “Big Data” for the past couple of years, but there has been “little” clarity for the traditional Enterprise customer. Hadoop, MapReduce, Cassandra, NoSQL – all interesting ideas, but what Enterprise IT needs is solutions. Solutions come when there are products optimized to solve the challenges of specific applications. Most of the exciting, fast-growing technology companies we hear about daily (Facebook, Zynga, Twitter, Groupon, LinkedIn, Google, etc.) are incredibly efficient data-centric businesses. These companies collect, analyze and leverage massive amounts of data and use it as a fundamental competitive weapon. In terms of really working with “Big Data,” Google started it. Larry and Sergey taught the world that analyzing more data often generates better results than a cleverer algorithm. These high-profile web companies created technologies to solve problems other companies had not faced before. In this copycat world we live in, Enterprise IT is ready to follow the consumer-tech leaders. The best enterprise companies are working hard to leverage vast amounts of data in order to make better decisions and deliver better products. At Lightspeed, we invested in companies like DataStax (www.datastax.com) and MapR Technologies (www.maprtech.com) because these are startups building solutions that enable Enterprise IT to work with promising Big Data platforms like Cassandra and Hadoop. With enterprise-grade solutions now available, I expect 2011 to be a year when tinkering leaps to full-scale engagement because these new platforms will deliver a meaningful advantage to Enterprise customers.
2. CLOUD COMPUTING FINDS ITS ENTERPRISE USE CASES
The hype around “Cloud Computing” is officially everywhere. My mom, who is in her sixties (sorry Mom) and just learned to text, recently asked me about Cloud Computing. Apparently she’s seen the commercials. In Enterprise IT circles and VC offices, there’s a lot of discussion around “Public” clouds vs. “Private” clouds; Infrastructure as a Service vs. Platforms as a Service; and the pros and cons of each. It’s all valuable theoretical debate, but people need to focus on the use cases and the specific economics of a particular “cloud” or platform configuration. As of right now, not every Enterprise IT use case fits the cloud model. In fact, most don’t. But there are three in particular that definitely do — application management, network and systems management and tier 2 and 3 storage. At Lightspeed, we’ve invested in a number of companies such as AppDynamics (www.appdynamics.com) and Cirtas (www.cirtas.com) which deliver solutions that are designed from the ground up to enable enterprise class customers to leverage the fundamental advantages of “Cloud Computing” – agility, leveraged resources, and a flexible cost model. Highly dynamic, distributed applications are being developed at an accelerating rate and represent an ideal use case for cloud environments when coupled with a solution like the one offered by AppDynamics which drives resource utilization based on application level demands. Similarly, Enterprise IT storage buyers have gotten smarter about tiering data among various levels of storage media, and infrequently accessed data is a great fit for cloud storage. Cloud controllers like the one offered by Cirtas enable enterprises to have the performance, security and reliability they are used to with traditional internal solutions but leverage the economics of the cloud.
3. VIRTUALIZING THE NETWORK
To date, the story of virtualization has been primarily about servers and storage. Tremendous innovation from VMware led the way for an entirely new set of companies to emerge in the data center infrastructure ecosystem. At Lightspeed, we talk about the fundamental pillars of the data center as application and systems management, servers, storage, and networking. Given all the advancement and activity around the first three, I think it’s about time the network caught up. As Enterprise IT continues to virtualize more of the data center and adopts cloud computing models (public or private), the network fundamentals are being forced to evolve as well. Networking solutions that decouple hardware from software are better aligned with the data center of the future. Companies such as Embrane (www.embrane.com) and Nicira Networks (www.nicira.com) are tackling this challenge head on and I believe 2011 will be the year where this fundamental segment of data center infrastructure starts to see meaningful momentum.
The Hottest Enterprise IT Trend You Have Never Heard of. December 6, 2010. Posted by Barry Eggers in datacenter, enterprise infrastructure, flash.
Ten years ago, the term virtualization was rarely uttered in an enterprise data center. Today, it’s part of the daily vocabulary. Server virtualization has changed the datacenter forever, shifting the proverbial IT bottleneck from computers to storage. There’s hardly an element of the enterprise infrastructure that hasn’t been impacted in some way by virtualization. Today, we all know more about virtualization than we care to admit, and VMware’s stock has gone up and down and up again….
The Cloud is supposed to be the next big thing in IT – first brought to the public’s attention in 2007 in The Big Switch, by Nicholas Carr. Back then it had not yet made an indelible mark on the enterprise.
Three years later, the Cloud is already the centerpiece of ubiquitous primetime ads brought to you by the folks in Redmond. The Cloud has been over hyped, then under hyped (Gartner would call this the Trough of Disillusionment), then over hyped again. Certainly, the Cloud will have its day in the sun. But this post is not about the Cloud…
This post is about Flash. Not Adobe Flash. Not Flash sales. But Flash memory. Flash memory, the solid state technology that has mysteriously entered our lives through iPods, smartphones, tablets, laptops, and practically any mobile device, is about to invade the Enterprise Datacenter – in a very big way. While consumers have provided the insatiable demand for this technology that has driven its cost lower and lower, a group of savvy startups have been preparing this technology for large scale business deployment.
It’s the hottest trend you’ve never heard of, much as virtualization was 10 years ago.
Why Flash memory? Why now? Because rotating hard disk drives (HDDs) are no longer good enough. The last bastion of moving parts (i.e., “energy hogs”) in the datacenter has lived a good life, but like tape, its days are numbered. Disk drives are great for storing large amounts of data, and will continue to have a relevant place in the data center in that role. But now that the bottleneck has shifted away from servers, there will be a new focus on storage performance that will pave the way for Flash memory-based solutions.
But while performance is the reason for introducing Flash into the datacenter, COST will be the reason that it takes a significant “byte” out of the disk drive market. You see, a “hybrid” storage system that includes Flash memory for speed and inexpensive HDDs for capacity is both faster AND lower cost than a system comprised solely of expensive high performance HDDs. Hybrid systems are expected to become the “flavor of the decade” for storage during 2011.
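The hybrid arithmetic can be made concrete with a back-of-the-envelope sketch. All prices and IOPS figures below are illustrative assumptions (roughly era-appropriate ballparks), not vendor data:

```python
# Back-of-the-envelope comparison of an all-performance-HDD array vs. a
# hybrid (flash + capacity HDD) array. Every price and IOPS figure here
# is an illustrative assumption, not a quoted spec.

def array_cost_and_iops(devices):
    """Sum ($ cost, IOPS) over a list of (count, cost_each, iops_each)."""
    cost = sum(n * c for n, c, _ in devices)
    iops = sum(n * i for n, _, i in devices)
    return cost, iops

# Option A: 24 x 15K RPM performance HDDs (~$400 each, ~180 IOPS each)
all_hdd = [(24, 400, 180)]

# Option B: 4 x SSDs for the hot working set (~$800 each, ~20,000 IOPS each)
#           plus 12 x 7.2K RPM capacity HDDs (~$150 each, ~80 IOPS each)
hybrid = [(4, 800, 20_000), (12, 150, 80)]

cost_a, iops_a = array_cost_and_iops(all_hdd)
cost_b, iops_b = array_cost_and_iops(hybrid)

print(f"All-HDD: ${cost_a}, {iops_a} IOPS")   # $9600, 4320 IOPS
print(f"Hybrid:  ${cost_b}, {iops_b} IOPS")   # $5000, 80960 IOPS
```

Even with these rough numbers, the hybrid configuration comes in both cheaper and an order of magnitude faster than the all-performance-HDD array, which is the “faster AND lower cost” claim in a nutshell.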
Unlike virtualization, Flash memory in the enterprise will not grab your attention because of a single poster child (aka VMware), but rather because of the depth and breadth of solutions that will pervade the market. There is an ecosystem building around enterprise flash memory that will enable continuous improvements in cost, performance, and reliability while providing a variety of product choices to end customers.
Companies like FusionIO*, OCZ, and Virident provide card-based solutions that plug into server slots. These solutions have a direct high speed path to the computer, allowing them to satisfy I/O intensive applications like database processing and web analytics.
Other companies like Pliant Technology* and STEC package Flash memory in the same form factor and with the same storage interface as HDDs, creating a solid state disk (SSD). The large storage and server OEMs will place SSDs into HDD slots, allowing them to quickly retrofit current products with this hot new memory technology.
There are also appliances from companies like Avere, Gridiron, and Kaminario that pack large capacities of Flash into a standalone system in order to accelerate capacity-hungry, mission-critical applications.
Other companies like Nimble Storage* take a “clean sheet of paper” approach to design a best in class “hybrid” system from the ground up, offering enterprise customers unprecedented functionality in a single system – raising the bar once again on the storage incumbents.
On the component side, companies like Sandforce, Anobit (also an SSD player), and Link-a-Media* are offering merchant silicon solutions for controlling and managing Flash. These pre-engineered building blocks could shorten development times for the next set of market entrants.
And, of course, there are several stealthy software companies that will engineer more performance, reliability, and availability into the solutions that gain the most favor with enterprise customers.
Over the next few years, billions of enterprise IT dollars will shift from rotating disks to Flash memory solutions. Several private companies will ride this wave to financial success. Meanwhile, the heavyweight storage and server OEMs, who have already recognized the disruptive nature of Flash memory, will reshape their product lines as they battle over market share. Given the scope and scale of solutions coming to market, and the insatiable enterprise demand for storage price/performance, it’s not crazy to predict that we will see the next $10B enterprise market develop in this area.
But don’t expect a Superbowl commercial for any of these companies anytime soon!
* A Lightspeed portfolio company
Top 5 trends for enterprise cloud computing in 2010. January 5, 2010. Posted by ravimhatre in Cloud Computing, datacenter, enterprise infrastructure, Uncategorized, virtualization.
Tags: cloud, Cloud Computing, datacenter, enterprise IT, virtualization
by Ravi Mhatre, Lightspeed Venture Partners
Lightspeed has invested across multiple enterprise infrastructure areas including database virtualization (Delphix), datacenter and cloud infrastructure (AppDynamics, MuleSoft, OpTier) and storage virtualization (Fusion-io, Pliant, Nimble).
This year we wanted to profile several important trends that we see emerging for Cloud Computing in 2010:
1. Enterprises move beyond experimentation with the cloud. Enterprises will start to deploy production cloud stacks with thousands of simultaneous VMs. They will increasingly be used as a resource for both pre-production and production workloads. CIOs and IT managers will test the benefits of creating and managing internal, elastic virtual datacenters – self-service, automated infrastructure with integrated and variable chargeback and billing capabilities, all built on commodity hardware.
2. Management software to deal with scaled cloud environments moves to the forefront. As infrastructure environments become increasingly dynamic and virtualized, the “virtual datacenter” or VDC will emerge as the new enterprise compute platform. New management platforms must be developed to apply policy and automation across thousands of transient servers, fluid underlying storage and networking resource pools, and variable workloads which often need to be dynamically migrated from one part of the VDC to another. Without new management tools, enterprises will fall short in their ability to achieve true “cloud economics” in their cloud environments.
3. Enterprise policy for dealing with public clouds starts to emerge. To counter the security and financial concerns around internal developers using public cloud providers such as Amazon on an ad hoc basis, CIOs and CFOs will start to craft their enterprises’ public cloud policies and centralize purchasing and procurement. Larger enterprises, due to security or compliance restrictions, may initially prioritize internal private cloud development to realize the benefits of cloud computing without compromising their data.
4. Public clouds: “It’s not just about Amazon”. Other mid- and large-sized vendors (e.g., Microsoft, IBM, Rackspace, AT&T, Verizon, and others) continue to gain share in this rapidly growing market, and third-party software matures that enables tier-2 and tier-3 service providers to get into the game of providing cloud services as a complement to traditional web and server hosting. EC2 becomes the commodity service offering as higher-end providers seek to differentiate their cloud offerings with SLA-based premium services and better management capabilities.
5. VMware has to rethink its business model. As Hyper-V, Xen, and KVM continue to commoditize the hypervisor and gain enterprise market share, cloud computing starts to encroach on traditional ESX/vSphere use cases for application and server consolidation. Value continues to move up the stack into integrated management features and scale-out application support. To counter enterprise adoption of other hypervisor and cloud overlay platforms, VMware will be forced to adjust pricing and licensing models to account for scale-out cloud deployments on top of hundreds or thousands of commodity servers.
Enterprise Infrastructure Predictions for 2009. December 5, 2008. Posted by John Vrionis in Cloud Computing, datacenter, enterprise infrastructure, Uncategorized.
Barry Eggers and I teamed up again this year to make a few predictions about major trends to watch out for in Enterprise Infrastructure in 2009. But before we get into what we’re seeing in our crystal balls, we thought we should grade our 2008 Enterprise Infrastructure Predictions:
1. Flash-based storage makes a move towards the datacenter: A-
While 2008 was not “the year of the enterprise flash drive” as we suggested it might be, market momentum is clearly building. EMC and Sun announced enterprise storage offerings that incorporate flash drives. IBM and Dell have publicly declared their interests. Activity among private companies, including subsystems and systems companies, continues to increase.
2. Virtualization extends to the desktop: C
The big guys decided to supplement “make” with “buy” – MSFT bought Calista (a Lightspeed company) and Kidaro, VMWR purchased Thinstall. However, the market has been slower to develop than we initially predicted, and with big IT budgets constrained in 2009, we expect this market to slip into 2010 and beyond.
3. The Battle for the Top of the Rack (TOR) heats up: B+
CSCO and VMWR have decided to play nice for the time being, but there is a line of private companies that will battle CSCO in the near future, including high-density 10G switching players like Arista (with a formidable team led by ex-CSCO Jayshree Ullal and Andy B), Woven (led by ex-3COM exec Jeff Thurmond), and the I/O virtualization guys Aprius (a Lightspeed company), 3Leaf, and Xsigo. It’s still CSCO’s market to lose, but don’t count the private guys out.
And now, for the 2009 Enterprise Infrastructure Predictions:
It will, no doubt, be a challenging year for enterprise infrastructure, as with other sectors. The enterprise focus on green 2.0 (reduced energy usage) may be temporarily replaced by a focus on green 1.0 (as in money, reduced expenses, increased revenues, and short ROI periods). Despite the challenges, we do see some innovative ideas gaining traction:
1. Internal and external enterprise class clouds building momentum:
VMware is hyping its Datacenter OS and vCloud initiatives. IBM, MSFT, Sun, and HP have all indicated their enterprise class offerings will be ready for prime time in 2009. Expect increased marketing muscle touting key features – reliability, performance, security, SLAs – as differentiators (vs Amazon, Google and each other). We expect to see leading private companies emerge that are offering innovative software which enables enterprise customers to take the leap and benefit from the economic advantages of the enterprise cloud.
2. Hybrid Storage solutions gain mindshare with enterprise customers:
Given the growing trend of using flash storage in the Enterprise datacenter, expect to see an increase in innovative “Hybrid” solutions that combine flash storage with good old fashioned rotating disk drives. In these systems, the flash storage provides the “turbo” performance for apps that require it, while the rotating disks provide large amounts of inexpensive storage capacity for less demanding apps. Taken together, these hybrid systems aim to significantly reduce the total cost of storage while increasing performance and capacity.
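The mechanism behind such hybrid systems is a hot/cold split: frequently accessed data is served from flash, everything else from inexpensive rotating disk. A toy sketch of that placement decision (real systems use far richer heuristics such as recency and sequentiality; the threshold and block names here are purely hypothetical):

```python
from collections import Counter

def place_blocks(access_log, threshold=3):
    """Toy tiering policy: blocks accessed at least `threshold` times
    in the observation window are promoted to flash; the rest stay on
    rotating disk. Purely illustrative of the hot/cold split."""
    counts = Counter(access_log)
    flash = {b for b, n in counts.items() if n >= threshold}
    disk = set(counts) - flash
    return flash, disk

# Hypothetical access trace: blocks "a" and "b" are hot, "c" and "d" cold.
log = ["a", "b", "a", "c", "a", "b", "d", "a", "b"]
flash, disk = place_blocks(log, threshold=3)
print(sorted(flash))  # ['a', 'b']  -> served from flash ("turbo" tier)
print(sorted(disk))   # ['c', 'd']  -> stay on inexpensive capacity disk
```

Because a small fraction of blocks typically absorbs most of the I/O, even a modest flash tier captures the bulk of the performance-critical traffic while the disks soak up the capacity.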
3. The rise of serverless computing – two trends collide:
Hypervisors have made servers more efficient, allowing them to run multiple applications concurrently on the same system. In parallel, we have seen a monumental shift towards enterprise infrastructure apps, such as storage, security, and networking, running on standard servers (i.e., appliances) instead of proprietary hardware. Taking the two trends together, in 2009, we will see multiple appliances combined onto a common physical platform. More importantly, we will see enterprise infrastructure apps and compute apps combined into a common server platform within the datacenter. The computing vendors will view this as a way to offer and control enterprise applications. The enterprise application providers will view it as a way to do “serverless computing”. Either way, the customer wins. Fewer physical servers means lower upfront capex and lower TCO.
4. GPU computing starts getting serious attention (again):
Nvidia continues to develop and improve the interface to its GPUs, which have hundreds of processing cores. The tantalizing possibilities for cost savings and application acceleration will drive further investigation into the possibilities of using GPUs for mainstream computing (despite previous hiccups from some venture-backed companies). There are significant programming model obstacles to overcome, but we prefer to view that as the opportunity. Perhaps one of the Cloud providers will offer GPU clusters as a high-end service. What do you think, Amazon?
Anand Rajaraman, co-founder of Lightspeed portfolio company Kosmix, posts about how to stop email overload and break silos using wikis, blogs, and IM.
We hit the email wall at my company Kosmix recently. When we were less than 30 people, managing by email worked reasonably well. The team was small enough that everyone knew what everyone else was doing. Frequent hallway conversations reinforced relationships. However, once we crossed the 30-person mark, we noticed problems creeping in. We started hearing complaints of email overload and too many meetings. And despite the email overload and too many meetings, people still felt that there was a communication problem and a lack of visibility across teams and projects. We were straining the limits of email as the sole communications mechanism.
We knew something had to be done. But what? Sri Subramaniam, our head of engineering, proposed a bold restructuring of our internal communications. He led an effort that resulted in us relying less on email and more on wikis, blogs, and instant messaging. Here’s how we use these technologies everyday in running our business.
* Blogs for Status Reports
* The Wiki for Persistent Information
* Instant Messaging for Spontaneous Discussions
The effects of the communication restructuring have been immediate and very visible. They include a lot less email, and almost none on weekends; better communication among people; and 360-degree visibility for every member of the Kosmix team. Since we instituted these changes, everyone on the team feels more productive, is more knowledgeable about the company, and has more spare time to spend on things outside of work.
Anand goes into detail as to how blogs, wikis and IM are used by all employees, and how this has streamlined the communications in the company. I highly recommend reading the whole thing.
2008 Enterprise Infrastructure Predictions. December 5, 2007. Posted by jeremyliew in 2008, datacenter, enterprise infrastructure, flash, virtualization.
1. Flash-based storage makes a move towards the datacenter. The last bastion of moving parts in the datacenter – rotating disk drives – will start to feel the heat (no pun intended) as flash-based storage solutions make their way into the enterprise. For the last few decades, rotating disks have dominated enterprise storage the way legendary John Wooden’s teams dominated their collegiate foes. While solid state disk drives based on DRAM have been around since Wooden was carrying a lineup card in his hand, they have been relegated to niche performance-oriented applications, the equivalent of playing backup center to Lew Alcindor. But Flash memory could change all that. Flash-based storage, whose cost/GB is rapidly approaching that of magnetic disks, offers the additional benefits of 10X the performance, higher storage densities, and, last but not least to datacenter enthusiasts, significantly lower power per I/O. All of this could propel SSD 2.0 into the mainstream. Sure, rotating disk drive companies will add flash memory caches and wave their magic marketing wands, but industry insiders will tell you it’s not enough. Innovation will come from a small group of companies that are solving the limitations of flash as it applies to enterprise users. Expect to see hybrid systems, based on a combination of flash and rotating disk, coming to a datacenter near you. Anyone for Green Storage?
2. Virtualization extends to the desktop. What is good for the server must be good for the desktop, right? Well, yes, but for a different reason. Server virtualization drives higher utilization on machines possessing an ever-increasing number of cores. Desktop Virtualization is not necessarily about utilization. Furthermore, desktop virtualization is not thin client 2.0. Thin Clients were about reducing up-front capital costs with a slimmed-down hardware client. Desktop Virtualization is about intelligently provisioning applications to desktop users. It’s about management, security, compliance, and reducing Total Cost of Ownership. Desktop Virtualization will also be more powerful to end users when used in conjunction with virtualized servers. The limitation with first generation virtualized desktops is that they offer a user experience much less satisfying than a full desktop, but that will begin to change in 2008…stay tuned for more details…
3. The Battle for the Top of the Rack (TOR) heats up. As server racks are populated with more cores per CPU and more VMs per core, memory and network I/O limitations will become priority concerns. How VMs share those physical resources will impact overall system performance and significantly influence the rate at which mission critical applications are run in virtualized environments. It’s a challenge most adopters of virtualization don’t deal with yet, but many vendors are working on solutions in anticipation of the time when well-utilized CPUs shift the datacenter bottleneck to memory and I/O. Whether the answer comes from software solutions that are internal to the server (advantage software companies) or additional hardware (perhaps a TOR solution) dedicated to managing network services and optimizing physical resource sharing (advantage system players), it represents a meaningful battle for a critical position in the next generation data center.
Later in the week, Cleantech.