Comment Re:Splunk (Score 1) 37
What are some of the cost/performance metrics of Splunk when data gets large (common for game developers).
How does Splunk do on data sizes in the 500 Gig range? And how much does it cost?
First of all, this article isn't a comparison or matchup - it's a speculative post by someone who has done very little research and obviously lacks domain knowledge in the space. There is no mention of use cases, data sizes, performance, or costs.
Hadoop is an open-source framework for distributed data processing, specifically an implementation of MapReduce. BigQuery is a hosted service that lets you run queries over massive datasets via an API. There are tools built on top of Hadoop that allow fast querying over large datasets (Impala), and there are tools that are not Hadoop-based that provide this as well (Spark + Shark). However, actually using these tools is a whole different game - the author makes no mention of how many nodes/VMs would be required to match the query performance of BigQuery.
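To make the distinction concrete: with BigQuery the "cluster" is invisible - you POST a SQL query to a hosted endpoint and get rows back. A minimal Python sketch, assuming a hypothetical project id and an OAuth2 token obtained out of band (the v2 jobs.query endpoint and the publicdata:samples.shakespeare sample table are real):

import json
import requests

ACCESS_TOKEN = "..."       # hypothetical OAuth2 token, obtained out of band
PROJECT_ID = "my-project"  # hypothetical project id

def bigquery_query(sql):
    # Synchronous ad-hoc query against the hosted service; no nodes or
    # VMs to provision -- the service decides how many machines do the scan.
    resp = requests.post(
        "https://www.googleapis.com/bigquery/v2/projects/%s/queries" % PROJECT_ID,
        headers={"Authorization": "Bearer " + ACCESS_TOKEN,
                 "Content-Type": "application/json"},
        data=json.dumps({"query": sql}),
    )
    resp.raise_for_status()
    return resp.json().get("rows", [])

for row in bigquery_query(
        "SELECT word, SUM(word_count) AS n "
        "FROM [publicdata:samples.shakespeare] GROUP BY word ORDER BY n DESC LIMIT 5"):
    print(row)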
Then there are data sizes. The author makes a strange claim that BigQuery "queries don’t run instantly; one of the samples took 3.3 seconds to grind through 3.49 Gigabytes of data. But that’s clearly fine for quick lookups." Huh? What tool(s) are you comparing against? BigQuery allows users to run full-table aggregate ad-hoc queries over really, really big datasets (i.e. terabytes). In public talks, Google has demonstrated regular-expression match queries, with sums and aggregations, over several terabytes of data in under a minute. To do the same with a MapReduce-based system, what would be needed - something like Hive, or a custom MapReduce job - and what would the performance be? For the same use case, what is the cost of using some of the "OLAP" tools that the author describes? Would love to see some benchmarks.
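For comparison, here is roughly what the custom-MapReduce route looks like: a regex-match-and-sum job written against Hadoop Streaming's real stdin/stdout contract (the log format and the pattern are hypothetical), which you would still have to run on a cluster you size and pay for yourself:

#!/usr/bin/env python
# mapper.py -- emits (matched_group, 1) for every input line the regex hits.
import re
import sys

PATTERN = re.compile(r"ERROR\s+(\w+)")  # hypothetical pattern to match

for line in sys.stdin:
    m = PATTERN.search(line)
    if m:
        print("%s\t1" % m.group(1))

#!/usr/bin/env python
# reducer.py -- sums counts per key; Streaming delivers keys sorted.
import sys

current, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t", 1)
    if key != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = key, 0
    total += int(value)
if current is not None:
    print("%s\t%d" % (current, total))

Submitted with something like: hadoop jar hadoop-streaming.jar -input logs/ -output out/ -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py. Two scripts, a cluster, and a job submission, versus one HTTP request.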
Re: "In the end, BigQuery is just another database."
Huh? BigQuery is not a database at all - it doesn't support CRUD operations on data; rather, it is an append-only analytics tool. Conversely, databases, relational or not, aren't really the right tools for full-table-scan ad-hoc queries over many terabytes, which is exactly what BigQuery is designed for. BigQuery is a developer's product, one that can be integrated with existing web apps via a RESTful API. Hadoop has its own development role and story (and tools like Cascading are really great), but it isn't designed as the backend for a RESTful API out of the box - it takes a bit more work to provide Hadoop as a service for developers to integrate with an application.
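To illustrate "append-only": the only write path is loading new data, e.g. via the v2 jobs.insert endpoint - there is no UPDATE or DELETE to issue. A sketch, with hypothetical bucket/dataset/table names and the same out-of-band token as above:

import json
import requests

ACCESS_TOKEN = "..."       # hypothetical OAuth2 token
PROJECT_ID = "my-project"  # hypothetical project id

job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://my-bucket/events.csv"],  # hypothetical file
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": "analytics",  # hypothetical dataset
                "tableId": "events",       # hypothetical table
            },
            # Rows can only be appended to the table, never modified in place.
            "writeDisposition": "WRITE_APPEND",
        }
    }
}

resp = requests.post(
    "https://www.googleapis.com/bigquery/v2/projects/%s/jobs" % PROJECT_ID,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN,
             "Content-Type": "application/json"},
    data=json.dumps(job),
)
resp.raise_for_status()
print(resp.json()["jobReference"]["jobId"])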
Re: "The public version of BigQuery probably isn't even used by Google, which likely has something bigger and better that we'll see in five years or so."
BigQuery is based on Google's internal Dremel, which is used every day at Google. There is a very public research paper describing Dremel (much as Google described MapReduce years ago). Read about what is available in Dremel versus what is available in BigQuery: http://research.google.com/pubs/pub36632.html
"You need tender loving care once a week - so that I can slap you into shape." - Ellyn Mustard