About this video
Hear from Brent Compton, Director of Storage Solution Architectures at Red Hat, and Stephen O'Sullivan, VP of Engineering at Silicon Valley Data Science, in this breakout session from Red Hat Summit 2017.
In the beginning, there was MapReduce over HDFS analyzing clickstream data. Since the advent of modern big data, there has been an explosion of analytics frameworks, both inside and outside the Hadoop ecosystem. To maximize the business benefit of these varied frameworks, data-driven enterprises have deployed many different specialized clusters, each with its own copy of the data. This can result in latency, additional cost, and potential inconsistencies. On-demand provisioning of right-sized compute pools for Spark, Hive, Hadoop, and Kafka processing, backed by common object storage, matches the right storage and infrastructure technology to each workload, yielding faster business insights while frequently improving agility and reducing total cost. Silicon Valley Data Science, together with Red Hat, describes practical examples of how and why large companies are adopting these emerging big data architectures, illustrating how Red Hat OpenStack and Ceph can be used for these patterns.
- May 15, 2017