Google used its summer I/O event to unveil Google Cloud Dataflow, describing it as a "significant step" towards a managed service model for data processing.
The firm has now announced the availability of the Cloud Dataflow SDK as open source, opening up the integration gates. Google also hopes the move will form the basis for porting Cloud Dataflow to other languages and execution environments.
"The value of data lies in analysis — and the intelligence one generates from it. Turning data into intelligence can be very challenging as data sets become large and distributed across disparate storage systems," Google reminds us.
Google Cloud Dataflow is currently an alpha release, positioned as a platform to democratize large-scale data processing by giving data scientists, data analysts, and data-centric developers easier and more scalable access to data.
The firm promises that users can discover "meaningful results" from their data via simple and intuitive programming concepts, without the extra noise of managing distributed systems. With the Cloud Dataflow SDK now open source, another door opens.
According to Google, "We've learned a lot about how to turn data into intelligence as the original FlumeJava programming models (basis for Cloud Dataflow) have continued to evolve internally at Google. Why share this via open source? It's so that the developer community can spur future innovation in combining stream and batch based processing models."
Reusable programming patterns are a key enabler of developer efficiency, and here the Cloud Dataflow SDK introduces a unified model for batch and stream data processing.
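To make that unified model concrete, here is a minimal sketch in the style of the open-sourced Dataflow SDK for Java. The bucket path is a placeholder, and the exact method signatures may have evolved since the alpha; the point is that the aggregation step is written once, independent of whether the source is bounded (batch) or unbounded (streaming).

```java
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.Count;
import com.google.cloud.dataflow.sdk.values.PCollection;

public class UnifiedModelSketch {
  public static void main(String[] args) {
    // Pipeline options (runner, project, etc.) come from the command line.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // A bounded (batch) source: lines of text from Cloud Storage.
    // The path below is a placeholder, not a real location.
    PCollection<String> lines = p.apply(TextIO.Read.from("gs://my-bucket/input-*.txt"));

    // The same counting transform would apply unchanged over an unbounded
    // (streaming) source such as Pub/Sub; only the input step differs.
    PCollection<Long> count = lines.apply(Count.<String>globally());

    p.run();
  }
}
```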
"Our approach to temporal-based aggregations provides a rich set of windowing primitives allowing the same computations to be used with batch or stream based data sources. We will continue to innovate on new programming primitives and welcome the community to participate in this process," reads the Google blog.