
The Future of Data Storage September 7, 2012

Posted by krishparikh in 2012, big data, enterprise 2.0, enterprise infrastructure, startups, Storage.

In the so-called age of “big data”, enterprises will need to contend not only with the sheer volume of data they generate (ranging from hundreds to thousands of terabytes), but also with the velocity and variety of these new data streams. (1) To put these numbers in perspective, imagine each enterprise storing and analyzing data equivalent to the volume of information catalogued by the US Library of Congress every year! (2)

Recognizing that this explosion of storage growth cannot be managed by legacy infrastructure, both investors and storage vendors are betting on flash memory as the technology to keep pace with the growing data challenges faced by enterprises. Incumbents EMC and IBM have recently acquired all-flash storage companies XtremIO and Texas Memory Systems to augment their legacy storage solutions. Meanwhile, startups Pure Storage and Nutanix have raised large rounds of growth financing, further evidence that investors are also bullish on the flash storage trend.

We at Lightspeed were early believers in the disruptive power of flash memory in next-generation storage systems. (3) The decreasing cost of flash memory, driven by widespread adoption in consumer devices, coupled with data access and retrieval times 10-100x faster than rotating disk and a power and physical footprint roughly ten times smaller, positioned flash well to be the transformative storage technology in the datacenter. Our early investments in component technologies (Link-a-Media, Pliant Technology, Fusion-io), systems companies (XtremIO), and software technologies (IO Turbine) centered around flash memory have validated that hypothesis.
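
To make that performance gap concrete, here is a minimal back-of-envelope sketch in Python. The latency figures are assumed round numbers for illustration, not measured vendor benchmarks:

```python
# Back-of-envelope comparison of random-read performance for rotating
# disk vs. flash. Latencies below are assumed round numbers for
# illustration, not vendor benchmarks.

DISK_LATENCY_MS = 10.0   # seek + rotational delay, 7200rpm-class disk
FLASH_LATENCY_MS = 0.1   # typical NAND flash random-read latency

def random_reads_per_second(latency_ms):
    """IOPS achievable if every read waits the full access latency."""
    return 1000.0 / latency_ms

disk_iops = random_reads_per_second(DISK_LATENCY_MS)    # ~100
flash_iops = random_reads_per_second(FLASH_LATENCY_MS)  # ~10,000

print(f"Disk:  ~{disk_iops:,.0f} random reads/sec")
print(f"Flash: ~{flash_iops:,.0f} random reads/sec")
print(f"Speedup: ~{flash_iops / disk_iops:.0f}x")  # top of the 10-100x range cited above
```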

To better understand the role of flash memory and its impact on performance, capacity, energy usage, and cost in next-generation storage systems, I invite you to join me at the Future of Data Storage event on September 18 in San Francisco.  Hosted by BTIG and moderated by Andrew Reichman, principal analyst with Forrester Research covering infrastructure and storage technologies, the event will bring together five leading companies focused on driving innovation around data storage in the enterprise:

  • Nimble Storage is creating hybrid storage systems that converge primary storage, backup storage, and data protection technology in a single appliance.
  • Nutanix is creating converged storage and compute appliances that allow enterprises to build Google-like, scale-out datacenters.
  • Pure Storage is creating all-flash enterprise storage arrays focused on delivering high performance at cost-effective price points.
  • Tintri is creating storage systems optimized for virtual machines, improving the manageability and cost-effectiveness of virtualized workloads.
  • Virident is creating PCIe flash accelerator cards that allow frequently used data to sit closer to the CPU in servers.

As we look toward the future, startups will continue to innovate around flash memory, creating next-generation storage systems stitched together with intelligent software to disrupt existing markets based on disk architectures.
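
As a rough illustration of what that “intelligent software” does, here is a minimal sketch of a hot/cold tiering policy that keeps the most frequently read blocks on a small flash tier and leaves the rest on disk. This is a hypothetical toy, not any of these vendors’ actual algorithms:

```python
from collections import Counter

# Hypothetical hot/cold tiering sketch: the most frequently read blocks
# are promoted to a small flash tier; everything else stays on disk.
# Illustrates the general idea only, not any vendor's algorithm.

FLASH_CAPACITY_BLOCKS = 2   # tiny flash tier, for the example
access_counts = Counter()   # block_id -> reads observed
flash_tier = set()          # block_ids currently resident on flash

def read_block(block_id):
    """Serve a read, record the access, and rebalance the tiers."""
    served_from = "flash" if block_id in flash_tier else "disk"
    access_counts[block_id] += 1
    rebalance()
    return served_from

def rebalance():
    """Keep the N most-read blocks on flash; demote the rest to disk."""
    hottest = {b for b, _ in access_counts.most_common(FLASH_CAPACITY_BLOCKS)}
    flash_tier.clear()
    flash_tier.update(hottest)

# A skewed workload quickly concentrates its hot blocks on flash.
for block in ["a", "a", "a", "b", "b", "c", "d", "a", "b"]:
    read_block(block)
print(sorted(flash_tier))  # ['a', 'b'] -- the two hottest blocks
```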

If you are interested in joining us at the event, please email eventRSVP@lsvp.com with your name and contact information. Webcasting will also be available.

I look forward to exploring these trends further during the Future of Data Storage event through the lens of five emerging startups – hope to see you there!

(1) McKinsey Global Institute, “Big data: The next frontier for innovation, competition, and productivity”

(2) Library of Congress website, January 2012: as of January 2012, the Library had collected about 285 terabytes of web archive data, growing at a rate of about 5 terabytes per month.

(3) http://techcrunch.com/2012/07/13/lightspeed-ventures-positions-for-the-new-age-of-data-with-investments-in-storage-space/

Follow us on Twitter at @lightspeedvp for more information on the future of storage and events like these.

Big Data + Machine Learning in Insurance June 4, 2012

Posted by jeremyliew in big data, financial services.

I’ve posted in the past about how Big Data + Machine Learning is disrupting lending, and about how this disruption in financial services often comes from below, from startups targeting the unbanked. The Economist notes that big data + machine learning is changing underwriting even at big insurers:

At least two big American life insurers already waive medical exams for some prospective customers partly because marketing data suggest that they have healthy lifestyles, says Tim Hill of Milliman, a consultancy that advises insurers on data-mining software systems.

The software picks up clues that are unavailable in medical records. Recklessness in one part of someone’s life is a pretty good signal of risk appetite in others, for example. A prospective policyholder with numerous speeding tickets is more likely than a safer driver to end up with a sports injury. The software also detects obscure correlations. People who frequent ATMs so they can make cash payments tend to live longer than those who prefer writing cheques or paying with credit cards, it turns out. People with long commutes tend to die younger. Why this should be is not clear: some speculate that ATM users tend to be more spontaneous types, who like to have cash in their pocket and whose lifestyle may be more active; others hypothesise that sedentary commutes mean less time to do something healthy in the evening.

Interestingly, the advantage in using new sources of data to underwrite appears to lie more in cost reduction and speed to decision than in accuracy:

But manual underwriting with medical tests can cost hundreds of dollars and, according to one estimate, drags on for an average of 42 days in America and Europe. That gives potential customers ample time to talk to a competitor or walk away. Automated underwriting can cost a tenth as much and be done once a human reviews the software’s recommendation.

Much of this is still in the anecdotal and experimental stage, but it is exciting to see that even big insurance companies can embrace new ideas.
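
To make the mechanics concrete, here is a minimal sketch of the kind of model such a system might use: a logistic score over non-medical signals. The features and weights are invented to echo the correlations described above, not any insurer’s actual model:

```python
import math

# Hypothetical underwriting sketch: a logistic risk score over
# non-medical signals. Features and weights are invented for
# illustration, not an insurer's actual model.

WEIGHTS = {
    "speeding_tickets":  0.40,   # recklessness signal (more tickets -> riskier)
    "commute_hours":     0.25,   # long commutes correlated with worse outcomes
    "prefers_cash_atm": -0.30,   # ATM/cash users tended to live longer
}
BIAS = -1.0

def risk_score(applicant):
    """Probability-like risk score via logistic(w . x + b)."""
    z = BIAS + sum(WEIGHTS[f] * applicant.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"speeding_tickets": 3, "commute_hours": 2.0, "prefers_cash_atm": 1}
print(f"risk score: {risk_score(applicant):.2f}")  # ~0.60
# A threshold on this score could route low-risk applicants past the
# medical exam, which is where the cost and speed savings come from.
```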


More on data-enabled underwriting April 30, 2012

Posted by jeremyliew in big data, financial services.

Check out my guest post on data-enabled underwriting at American Banker.

How Big Data is changing the lending industry February 27, 2012

Posted by jeremyliew in big data, financial services.

I have a post on PandoDaily today about how big data + machine learning is reshaping lending and underwriting. I talk about some of the leading players in the space, including Wonga, Zestcash*, Klarna, and others, and why there is so much opportunity to create hugely disruptive companies in this space. Check it out!

* Zestcash is a Lightspeed portfolio company.