Session abstract:
When Nathan Marz coined the term Lambda Architecture back in 2012, he may simply have been looking for a catchy title for his upcoming book. No doubt, the Lambda Architecture has since gained traction, serving as a blueprint for building large-scale, distributed data processing systems in a flexible and extensible manner. But there is also a sometimes overlooked aspect of the Lambda Architecture: human fault tolerance. Humans make mistakes; machines don't. Machines scale; humans don't.
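To make the three layers concrete before turning to component choices, here is a minimal sketch in Python; all names (`master_dataset`, `batch_layer`, `speed_layer`, `serving_layer`) are illustrative and not tied to any specific Hadoop ecosystem component. The key to human fault tolerance is the immutable, append-only master dataset: a bug in view code is fixed by correcting the code and recomputing, because the raw data itself is never mutated.

```python
from collections import defaultdict

# Immutable, append-only master dataset: raw events are only ever appended,
# never updated in place. This is what makes the architecture tolerant of
# human error: a buggy view can always be recomputed from the raw facts.
master_dataset = []

def batch_layer(events):
    """Recompute the batch view from scratch over the full master dataset."""
    view = defaultdict(int)
    for key, value in events:
        view[key] += value
    return dict(view)

# Speed layer: an incremental view covering events that the most recent
# batch run has not yet seen.
realtime_view = defaultdict(int)

def speed_layer(event):
    key, value = event
    realtime_view[key] += value

def serving_layer(key, batch_view):
    """Merge the (complete but stale) batch view with the realtime delta."""
    return batch_view.get(key, 0) + realtime_view.get(key, 0)

# Usage: append raw events, run a batch pass, absorb newer events in the
# speed layer, and answer queries by merging both views.
master_dataset.extend([("page_a", 1), ("page_a", 1), ("page_b", 1)])
batch_view = batch_layer(master_dataset)

new_event = ("page_a", 1)
master_dataset.append(new_event)  # the raw fact is still captured immutably
speed_layer(new_event)            # ...and is queryable right away

print(serving_layer("page_a", batch_view))  # -> 3
```

In a real deployment the batch layer would typically run on Hadoop (e.g. MapReduce or Spark) over HDFS and the speed layer on a stream processor such as Storm; the sketch only illustrates the division of responsibilities between the layers.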
By reviewing a number of real-world architectures of distributed applications from our customer and partner base, I try to answer the following questions:
- Which Apache Hadoop ecosystem components are useful for which layer of the Lambda Architecture?
- What impact does the choice of a particular component have on human fault tolerance?
- Are there established good practices for using certain Apache Hadoop ecosystem components in the three-layered Lambda Architecture?