An Exploration Of The Impediments To Reusable Data Pipelines
Summary
In this episode of the Data Engineering Podcast, the inimitable Max Beauchemin talks about reusability in data pipelines. The conversation explores the "write everything twice" problem, where similar pipelines are built without code reuse, and discusses the challenges of managing different SQL dialects and relational databases. Max also touches on the evolving role of data engineers, drawing parallels with front-end engineering, and suggests that generative AI could facilitate knowledge capture and distribution in data engineering. He encourages the community to share reference implementations and templates to foster collaboration and innovation, and expresses hopes for a future where code reuse becomes more prevalent.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm joined again by Max Beauchemin to talk about the challenges of reusability in data pipelines
- Introduction
- How did you get involved in the area of data management?
- Can you start by sharing your current thesis on the opportunities and shortcomings of code and component reusability in the data context?
- What are some ways that you think about what constitutes a "component" in this context?
- The data ecosystem has arguably grown more varied and nuanced in recent years. At the same time, the number and maturity of tools has grown. What is your view on the current trend in productivity for data teams and practitioners?
- What do you see as the core impediments to building more reusable and general-purpose solutions in data engineering?
- How can we balance the actual needs of data consumers against their requests (whether well-informed or not) so that we can better design our workflows for reuse?
- In data engineering there are two broad approaches: code-focused or SQL-focused pipelines. In principle, one would expect code-focused environments to have better composability. What realities are you seeing in your own experience, and what do you hear from other teams?
- When it comes to SQL dialects, dbt offers the option of Jinja macros, whereas SDF and SQLMesh offer automatic translation. There are also tools like PRQL and Malloy that aim to abstract away the underlying SQL. What are the tradeoffs across those options that help or hinder the portability of transformation logic? (A sketch of automatic dialect translation follows this question list.)
- Which layers of the data stack/steps in the data journey do you see the greatest opportunity for improving the creation of more broadly usable abstractions/reusable elements?
- low/no code systems for code reuse
- impact of LLMs on reusability/composition
- impact of background on industry practices (e.g. DBAs, sysadmins, analysts vs. SWE, etc.)
- polymorphic data models (e.g. activity schema)
- What are the most interesting, innovative, or unexpected ways that you have seen teams address composability and reusability of data components?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data-oriented tools and utilities?
- What are your hopes and predictions for sharing of code and logic in the future of data engineering?
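As a companion to the SQL-dialect question above, here is a minimal sketch (not from the episode) of what automatic dialect translation can look like in practice, using the open source sqlglot library that SQLMesh builds on. The query, table, and column names are invented for illustration.

```python
# Minimal sketch (illustrative, not from the episode) of automatic SQL dialect
# translation with sqlglot, the parser/transpiler that SQLMesh builds on.
# The query, table, and column names are invented for the example.
import sqlglot

duckdb_sql = (
    "SELECT date_trunc('day', created_at) AS day, count(*) AS events "
    "FROM page_views GROUP BY 1"
)

# Render the same transformation logic for several target dialects.
for dialect in ["bigquery", "snowflake", "spark"]:
    translated = sqlglot.transpile(duckdb_sql, read="duckdb", write=dialect)[0]
    print(f"{dialect}: {translated}")
```

dbt's Jinja macros address the same portability concern by hand: instead of transpiling, a macro branches on the target adapter and emits dialect-specific SQL, trading automation for explicit control.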
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
- Max's Blog Post
- Airflow
- Superset
- Tableau
- Looker
- PowerBI
- Cohort Analysis
- NextJS
- Airbyte
- Fivetran
- Segment
- dbt
- SQLMesh
- Spark
- LAMP Stack
- PHP
- Relational Algebra
- Knowledge Graph
- Python Marshmallow
- Data Warehouse Lifecycle Toolkit (affiliate link)
- Entity Centric Data Modeling Blog Post
- Amplitude
- OSACon presentation
- ol-data-platform: Tobias' team's data platform code