The Enterprise Flash Market is Taking Off
May 10, 2012. Posted by Barry Eggers in enterprise infrastructure, flash.
Congratulations to Lightspeed portfolio company XtremIO and founders Ehud, Shuki, and Yaron for their acquisition which was announced today. This is another reminder of just how fast flash has gone from promising consumer technology to mainstream enterprise building block – primarily fueled by the proliferation of virtualization, BI, and Big Data coupled with the rapid decline in flash pricing.
We made our first investment in an enterprise flash company (Pliant Technology) in 2007; in 2008 we blogged about the potential for this technology to disrupt the storage market, and in 2010 we predicted that it could be the next $10 billion IT market.
Venture dollars have flowed into the enterprise flash market over the last few years and, to date, there have been 7 major liquidity events for venture-backed companies: FusionIO’s IPO (a Lightspeed portfolio company), Pliant (a Lightspeed portfolio company acquired by Sandisk), SandForce (LSI), Anobit (Apple), IOTurbine (a Lightspeed portfolio company acquired by FusionIO), Flashsoft (Sandisk), and XtremIO (a Lightspeed portfolio company). There are also a number of promising companies emerging in this market, including Violin Memory, Nimble Storage (a Lightspeed portfolio company), Pure Storage, Solidfire, Tintri (a Lightspeed portfolio company), Avere, Kaminario, and Nutanix (a Lightspeed portfolio company) as well as many others in development phases.
Despite all of the exit activity, we are in the early innings of this market – stay tuned for more fireworks…
The Hottest Enterprise IT Trend You Have Never Heard of
December 6, 2010. Posted by Barry Eggers in datacenter, enterprise infrastructure, flash.
Ten years ago, the term virtualization was rarely uttered in an enterprise data center. Today, it’s part of the daily vocabulary. Server virtualization has changed the datacenter forever, shifting the proverbial IT bottleneck from computers to storage. There’s hardly an element of the enterprise infrastructure that hasn’t been impacted in some way by virtualization. Today, we all know more about virtualization than we care to admit, and VMWare’s stock has gone up and down and up again….
The Cloud is supposed to be the next big thing in IT. It was first brought to the public’s attention in 2007 in The Big Switch, by Nicholas Carr, but back then it had not yet made an indelible mark on the enterprise.
Three years later, the Cloud is already the centerpiece of ubiquitous primetime ads brought to you by the folks in Redmond. The Cloud has been over hyped, then under hyped (Gartner would call this the Trough of Disillusionment), then over hyped again. Certainly, the Cloud will have its day in the sun. But this post is not about the Cloud…
This post is about Flash. Not Adobe Flash. Not Flash sales. But Flash memory. Flash memory, the solid state technology that has mysteriously entered our lives through iPods, smartphones, tablets, laptops, and practically any mobile device, is about to invade the Enterprise Datacenter – in a very big way. While consumers have provided the insatiable demand for this technology that has driven its cost lower and lower, a group of savvy startups have been preparing this technology for large scale business deployment.
It’s the hottest trend you’ve never heard of, just as virtualization was 10 years ago.
Why Flash memory? Why now? Because rotating hard disk drives (HDDs) are no longer good enough. The last bastion of moving parts (i.e. “energy hogs”) in the datacenter has lived a good life, but like tape, its days are numbered. Disk drives are great for storing large amounts of data, and will continue to have a relevant place in the data center in that role. But now that the bottleneck has shifted away from servers, there will be a new focus on storage performance that will pave the way for Flash memory-based solutions.
But while performance is the reason for introducing Flash into the datacenter, COST will be the reason that it takes a significant “byte” out of the disk drive market. You see, a “hybrid” storage system that includes Flash memory for speed and inexpensive HDDs for capacity is both faster AND lower cost than a system comprised solely of expensive high performance HDDs. Hybrid systems are expected to become the “flavor of the decade” for storage during 2011.
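To make the hybrid cost/performance argument concrete, here is a back-of-envelope sketch. All drive prices and IOPS figures below are illustrative round numbers assumed for this example, not vendor data:

```python
# Back-of-envelope comparison: an all-high-performance-HDD array vs. a
# "hybrid" array mixing a few flash SSDs with cheap capacity HDDs.
# Every price and IOPS figure here is a hypothetical round number.

def array_cost_and_iops(drives):
    """drives: list of (count, cost_per_drive, iops_per_drive) tuples."""
    cost = sum(n * c for n, c, _ in drives)
    iops = sum(n * i for n, _, i in drives)
    return cost, iops

# 24 x 15K RPM high-performance HDDs: ~$600 each, ~180 IOPS each (assumed)
all_fast = [(24, 600, 180)]

# 4 x flash SSDs (~$900, ~20,000 IOPS each) for speed,
# plus 12 x cheap SATA HDDs (~$150, ~80 IOPS each) for capacity (assumed)
hybrid = [(4, 900, 20000), (12, 150, 80)]

fast_cost, fast_iops = array_cost_and_iops(all_fast)
hyb_cost, hyb_iops = array_cost_and_iops(hybrid)

print(fast_cost, fast_iops)  # 14400, 4320
print(hyb_cost, hyb_iops)    # 5400, 80960
```

Even with made-up numbers, the shape of the argument is clear: the hybrid configuration is both cheaper and far faster on I/O, which is exactly why “faster AND lower cost” is possible at the same time.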
Unlike virtualization, Flash memory in the enterprise will not grab your attention because of a single poster child (aka VMWare), but rather because of the depth and breadth of solutions that will pervade the market. There is an ecosystem building around enterprise flash memory that will enable continuous improvements in cost, performance, and reliability while providing a variety of product choices to end customers.
Companies like FusionIO*, OCZ, and Virident provide card-based solutions that plug into server slots. These solutions have a direct high speed path to the computer, allowing them to satisfy I/O intensive applications like database processing and web analytics.
Other companies like Pliant Technology* and STEC package Flash memory in the same form factor and with the same storage interface as HDDs, creating a solid state disk (SSD). The large storage and server OEMs will place SSDs into HDD slots, allowing them to quickly retrofit current products with this hot new memory technology.
There are also appliances from companies like Avere, Gridiron, and Kaminario that pack large capacities of Flash into a standalone system in order to accelerate capacity-hungry, mission-critical applications.
Other companies like Nimble Storage* take a “clean sheet of paper” approach to design a best in class “hybrid” system from the ground up, offering enterprise customers unprecedented functionality in a single system – raising the bar once again on the storage incumbents.
On the component side, companies like SandForce, Anobit (also an SSD player), and Link-a-Media* are offering merchant silicon solutions for controlling and managing Flash. These pre-engineered building blocks could shorten development times for the next set of market entrants.
And, of course, there are several stealthy software companies that will engineer more performance, reliability, and availability into the solutions that gain the most favor with enterprise customers.
Over the next few years, billions of enterprise IT dollars will shift from rotating disks to Flash memory solutions. Several private companies will ride this wave to financial success. Meanwhile, the heavyweight storage and server OEMs, who have already recognized the disruptive nature of Flash memory, will reshape their product lines as they battle over market share. Given the scope and scale of solutions coming to market, and the insatiable enterprise demand for storage price/performance, it’s not crazy to predict that we will see the next $10B enterprise market develop in this area.
But don’t expect a Superbowl commercial for any of these companies anytime soon!
* A Lightspeed portfolio company
Dan Cook follows up his great post on how freemium beats advertising as a business model for flash games with a second great post on how you can get your players to pay you. After discussing the historical reasons that most flash games today are such lightweight affairs, he recommends the following checklist to see if you’re building enough value in your games to get users to pay you:
Quick value checklist
- Are you ignoring bad metrics like portal ratings?
- Are you measuring the holy triumvirate of value: fun, retention, money?
- Are you collecting real customer data?
- Does your game score 4 out of 5 on the fun scale?
- Do players return after a week?
- Is your game design amenable to high retention play?
- Are you iterating on your game and improving your game as measured by internal metrics? Have you figured out the big levers that affect player experience?
- Do you know when you are done? Do you know when you’ve reached the point where your game has proven value to your players?
- Are you willing to bail on the game if it doesn’t show signs of improvement?
Dan recommends measuring key drivers of value such as how much fun players have at various time points (by random survey), how often players return, and how much money you make from each player (on average). He then recommends making various game design changes, or a “kill the game decision” based on these metrics.
I strongly support the idea of using metrics to fine tune game play with real live players, in much the same way that web 2.0 used metrics to fine tune user behavior. This is best practice for many social games today – Siqi and David from Serious Business and Lil Green Patch gave a talk at the Social Gaming Summit about just this topic. I think another important metric is engagement (e.g. average time spent playing the game, including multiple sessions). I believe engagement is correlated with monetization – the deeper a player is engaged with a game, the more likely they will be willing to pay. I think that this may be a better measure than retention (although I’m open to debate on this point). In many free to play games, the bulk of the money is spent in the first spike of game play, so whether they continue to return or not may not be as important as how well you hook them in the first few days that they play the game, and how addicted you can get them.
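As a sketch of how these metrics might be computed from raw play data (the session-log format and all numbers here are invented for illustration):

```python
# A minimal sketch of the per-player metrics discussed above: week-one
# retention, engagement (average total play time), and ARPU.
# The log format and figures are hypothetical, purely for illustration.
from collections import defaultdict

sessions = [
    # (player_id, days_since_first_play, minutes_played, dollars_spent)
    ("a", 0, 30, 0.0), ("a", 7, 12, 4.99),
    ("b", 0, 5, 0.0),
    ("c", 0, 45, 0.0), ("c", 8, 20, 0.0),
]

players = defaultdict(list)
for pid, day, mins, spend in sessions:
    players[pid].append((day, mins, spend))

n = len(players)

# "Do players return after a week?" -- fraction seen on day 7 or later
retained = sum(any(d >= 7 for d, _, _ in s) for s in players.values()) / n

# Engagement: average total minutes per player across all sessions
engagement = sum(m for s in players.values() for _, m, _ in s) / n

# ARPU: total revenue divided by unique players
arpu = sum(sp for s in players.values() for _, _, sp in s) / n

print(retained, engagement, arpu)
```

In practice you would compute these over cohorts and track them release by release, which is what makes the “kill the game” decision a data question rather than a gut call.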
This of course leads to questions of how you can build long term engagement, which Dan also has some suggestions for:
- Narrative, story, and cut scenes exhibit “rapid burnout”. In other words, players see them once or twice and then are bored when they see them again. Games that rely on such content have generally low retention metrics. You can mitigate this by releasing new narrative content on a regular basis to keep the product ‘fresh’, but this has a high cumulative cost.
- Linear levels or solvable puzzles also exhibit rapid burnout. Game systems that can be completed or conquered are usually one shot activities. You can layer additional challenges within each level, but often only expert players will be motivated to come back for a second play through.
- Some handcrafted content like text or static images can be refreshed cheaply: The type of handcrafted content you include makes a huge difference on the slope of your increasing costs. New text-based questions in a trivia game are relatively cheap compared to creating new God of War levels. An hour of text-based content is likely several orders of magnitude cheaper to build.
- Social content is low burnout: People will keep interacting with their friends for years. Mechanics that can tap into this often have very high retention rates. Anything that allows players to chat, share and form social identities in a community is pure gold.
- Grinding results in burnout, but it slows the process. Techniques like leveling or purchasing upgrades can dramatically increase the length of the game for very little development and design costs. Think of grinding as method of stretching, but not adding to your content. Grinding techniques only delay the inevitable. They can result in lower fun scores as people feel obligated to play, but aren’t enjoying the process of playing. Since you want people to fall in love, such a reaction can be counter productive to your goals.
- User generated content systems are low burnout: User generated content is ultimately a social system that encourages users to create consumable puzzles. The puzzles themselves may be short lived, but the community of creators can thrive for decades. This solves the problem of the linearly increasing cost of more handcrafted content by applying large numbers of people working for free.
- Algorithmic content has low burnout, but is hard to create and balance: Evergreen mechanics like Bejeweled or random map generation in Nethack keep people playing for hours. However, they are tricky to invent and balance.
An example of a high retention game is one like Puzzle Pirates that has social (avatar, chat, guilds), grinding (levels) and evergreen algorithmic content (puzzles). There is some light narrative in the form of periodic events and very little in the form of conquerable level design. Most games have a mix of all these various types of content, and successful services almost always put a portion of their recurring revenue towards a steady trickle of low marginal cost handcrafted content. However, high retention game designs tend to emphasize content with less burnout.
This would lead you to believe that (i) sandbox games, (ii) user generated puzzle games, and (iii) multiplayer games are well suited to driving long term player engagement without forcing costs to scale linearly. I’m inclined to agree.
I didn’t make it up to Casual Connect this year, so have been scanning the blog writeups. It sounds like Jim and Greg from Kongregate had a great session about some of the Fatal Flaws of Flash Game Design.
Adrian Crook notes that game plays are not as long-tail as expected:
- 1st game – 12m plays a month
- 2nd game – 10m
- 20th game – 2m
- 60th game – 1m
- Top 1% – 50% of playtime
- Top 10% of games – 90% of playtime
This won’t be a surprise for most game designers – it turns out that quality matters!
Greg from Kongregate posted his own notes, summarized as:
#1 You’re Making a Game, Not a Homework Assignment (i.e. start with the fun in the game)
#2 Ask the People Who Matter (i.e. get strangers to play test, not just friends)
#3 “Controls, Controls, You Must Learn Controls” (i.e. no one reads the instructions so make controls intuitive)
#4 Calling Your Game Art/Hardcore Is Not an Excuse (i.e. making your game difficult to use or play is not a good idea)
#5 Start from the Bottom Up, Not the Top Down (same as #1)
#6 Focus on Your Strengths, Not Your Weaknesses – Don’t Try to do Everything (i.e. make sequels, clone your own successes, and don’t try to be all things to all people)
#7 The Player Does What’s Efficient, Not What’s Fun (so make sure the efficient way to play to win is also fun)
#8 Show That You’re Human (i.e. funny is good, and don’t kill yourself on graphics)
#9 The Final 10% is Most Important (i.e. launch the game, not the beta). This last point is worth quoting more extensively from Greg:
When your game is technically done, there’s a tremendous urge to release it immediately. It’s like finishing a book report and not wanting to proofread it. It’s done! I can turn it in! I can be finished! The light is here!
But resist it. The final 10% of polish is by far the most efficient use of your time, even if it’s the most annoying and feels the least productive (since you’re changing things rather than building them).
But its importance cannot be overstated. Maybe the boss on level 1 has too much health, and 40% of the people who play your game give up at that point. Five minutes of tweaking a health number could have been the best five minutes of time you ever spent in your entire life.
Don’t forget to add the little things! Having a mute button (separate for sound and music) and an intuitive save system will go a long way in making players like your game (or, more accurately, in preventing them from hating it).
Play your game. Play it again and again and again. Get others to play it. Get their feedback. Tweak, tweak, tweak. Continue polishing and ironing out bugs. Don’t be afraid to cut something out entirely if it’s not beneficial to the game – yeah, I know, you already put the work into it, but the player doesn’t care how much work you’ve put into it. If something is there that’s not fun, it simply shouldn’t be there. There is no advantage to your game being big and long purely for the sake of being big and long. Again, it’s not a homework assignment.
While I agree with this last part, I think that launching the game with good analytics built in will help you do this tweaking with real player feedback.
Erin Bell at Gamasutra has a couple more notes
Don’t Expect To Be Paid By The Hour
The Kongregate duo added: “Developers are shocked when they produce a game that they’ve been working on for four months and they only get a $1,000 or $2,000 sponsorship offer on it.”
“The thing is, no one really asked them to make this game. It’s something they did on their own, and reverse logic doesn’t really work when you try to break it down by the hour. It doesn’t matter how long you spent on the game, it’s the final product that matters.”
Don’t Equate Length With Value
A lot of developers feel like they need to have a long game, which makes sense if they’re trying to sell their game for $60 on a console, but not so much for a free Flash game, according to Kongregate.
The Several Journeys of Reemus series, for example, was a successful game on Kongregate, but most of the negative comments focused on its unnecessary length. The final level in particular, which was extremely repetitive, drove people crazy. When McClanahan asked the developer why he had made the final level so long, he said that the game would have been too short if he hadn’t.
McClanahan contrasted that example with You Have to Burn the Rope – a game that was one minute long to play, but has an average rating of 4.02 (out of 5) at Kongregate.
On this point, I hold more with Dan Cook’s view that the way to break out of the $1-2k/game mindset for Flash Games is by integrating virtual goods as the business model. Usually a player’s willingness to spend money on virtual goods is correlated with their level of engagement. This means short games (unless they are replayable) are unlikely to get users to open their wallets. Equally, games that are long for the sake of being long (and lose the fun along the way) are unlikely to get users to open their wallets either. Long play sessions driven by fun are the most likely to be able to make the jump to a more lucrative virtual goods driven business model, in my opinion.
Gaming business models: Freemium beats advertising
July 7, 2009. Posted by jeremyliew in advertising, business models, flash, game design, games.
Dan Cook has a great post about business models for flash game developers over at Lost Garden. He says:
Ads are a really crappy revenue source

For a recent game my friend Andre released, 2 million unique users yields around $650 from MochiAds. This yields an Average Revenue Per User (ARPU) of only $0.000325 per user. Even when you back in the money that sponsors will pay, I still only get an ARPU of $0.0028 per user. In comparison, a MMO like Puzzle Pirates makes about $0.21 per user that reaches the landing page (and $4.20 per user that registers).

What this tells me is that other business models involving selling games on the Internet are several orders of magnitude more effective at making money from an equivalent number of customers. When your means of making money is 1/100th as efficient as money making techniques used by other developers, maybe you’ve found one big reason why developers starve when they make Flash games.
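Dan’s arithmetic checks out, and it is worth sanity-checking the gap he describes. Using only the figures quoted above, the ratio works out to roughly 75x, which he rounds to “1/100th”:

```python
# Sanity-checking the ARPU figures quoted from Dan Cook's post.
ad_revenue = 650.0              # MochiAds revenue for ~2M unique users
unique_users = 2_000_000
arpu_ads = ad_revenue / unique_users
print(arpu_ads)                 # 0.000325 -- matches the quoted figure

arpu_with_sponsors = 0.0028     # ads plus sponsorship money (quoted)
arpu_puzzle_pirates = 0.21      # per user reaching the landing page (quoted)

# How many times more effective is the freemium MMO model?
ratio = arpu_puzzle_pirates / arpu_with_sponsors
print(round(ratio))             # 75
```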
Ask for the money
When game developers ask for money, they are usually pleasantly surprised. Their customers give them money; in some cases, substantial amounts. I witnessed this early in my career making shareware games at Epic in the 90s, and I’m seeing the same basic principles in play with high end Flash games. Fantastic Contraption, for example, pulled in low 6 figures after only a few months on the market. That’s about 100x better than a typical flash game and in-line with many shareware or downloadable titles.
I think his conclusion is right not just for Flash game developers, but for all sorts of game developers, including MMOGs, iPhone games etc. Dan runs through some steps that game developers should take to maximize their chances of being able to make a living from designing games, specific ideas about what to charge for, and responses to common objections to getting users to pay. For new or aspiring game designers, it is worth reading the whole thing.
Flash in the Datacenter?
March 31, 2009. Posted by Barry Eggers in flash, Uncategorized.
Tags: flash, Pliant
Lightspeed portfolio company Pliant secured $15M in Series C funding earlier this month to bring its Enterprise Flash SSDs (EFDs) to Datacenter markets. While Pliant’s solution offers enterprise users the performance, reliability, and durability they can only dream of, there will need to be additional enabling technologies in order to propel SSDs into the datacenter mainstream.
The first wave of Flash SSDs solves performance problems. Any IT administrator will tell you there are a lot of these “hotspots” to fix, though collectively they are regarded as a fast-growing niche market.
The second wave is where SSDs will make their mark against rotating disks (the last bag o’ parts in the datacenter). This is the cost/performance wave. In this wave, Flash SSDs combined with cheap rotating storage will provide the capacity and performance of premium rotating storage solutions – at a fraction of the price. These “hybrid” systems represent a huge new opportunity in the datacenter (as discussed in our 2009 Enterprise IT predictions), and the large storage OEMs all know it.
But in order for this wave to happen, new file systems – ones that intelligently move data between performance resources and capacity resources – will need to be developed and deployed – more on that later…
2008 Enterprise Infrastructure Predictions
December 5, 2007. Posted by jeremyliew in 2008, datacenter, enterprise infrastructure, flash, virtualization.
1. Flash-based storage makes a move towards the datacenter. The last bastion of moving parts in the datacenter – rotating disk drives – will start to feel the heat (no pun intended) as flash-based storage solutions make their way into the enterprise. For the last few decades, rotating disks have dominated enterprise storage the way legendary John Wooden’s teams dominated their collegiate foes. While solid state disk drives based on DRAM have been around since Wooden was carrying a lineup card in his hand, they have been relegated to niche performance-oriented applications, the equivalent of playing backup center to Lew Alcindor. But Flash memory could change all that. Flash-based storage, whose cost/GB is rapidly approaching that of magnetic disks, offers the additional benefits of 10X the performance, higher storage densities, and last but not least to datacenter enthusiasts, significantly lower power per I/O. All of this could propel SSD 2.0 into the mainstream. Sure, rotating disk drive companies will add flash memory caches and wave their magic marketing wands, but industry insiders will tell you it’s not enough. Innovation will come from a small group of companies that are solving the limitations of flash as it applies to enterprise users. Expect to see hybrid systems, based on a combination of flash and rotating disk, coming to a datacenter near you. Anyone for Green Storage?
2. Virtualization extends to the desktop. What is good for the server must be good for the desktop, right? Well, yes, but for a different reason. Server virtualization drives higher utilization on machines possessing an ever-increasing number of cores. Desktop Virtualization is not necessarily about utilization. Furthermore, desktop virtualization is not thin client 2.0. Thin Clients were about reducing up-front capital costs with a slimmed-down hardware client. Desktop Virtualization is about intelligently provisioning applications to desktop users. It’s about management, security, compliance, and reducing Total Cost of Ownership. Desktop Virtualization will also be more powerful to end users when used in conjunction with virtualized servers. The limitation with first generation virtualized desktops is that they offer a user experience much less satisfying than a full desktop, but that will begin to change in 2008…stay tuned for more details…
3. The Battle for the Top of the Rack (TOR) heats up. As server racks are populated with more cores per CPU and more VMs per core, memory and network I/O limitations will become priority concerns. How VMs share those physical resources will impact overall system performance and significantly influence the rate at which mission critical applications are run in virtualized environments. It’s a challenge most adopters of virtualization don’t deal with yet, but many vendors are working on solutions in anticipation of the time when well-utilized CPUs shift the datacenter bottleneck to memory and I/O. Whether the answer comes from software solutions that are internal to the server (advantage software companies) or additional hardware (perhaps a TOR solution) dedicated to managing network services and optimizing physical resource sharing (advantage system players), it represents a meaningful battle for a critical position in the next generation data center.
Later in the week, Cleantech.