Episodes

  • Qualcomm Senior Director Siddhika Nevrekar
    Dec 16 2024

    Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of on-device computing, the challenges of training AI models for the edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.

    Key Points From This Episode:

    • Siddhika Nevrekar walks us through her career pivot from cloud to edge computing.
    • Why she’s passionate about overcoming her fears and achieving the impossible.
    • Increasing compute on edge devices versus developing more efficient AI models.
    • Siddhika explains what makes Apple a truly unique company.
    • The original inspirations for edge computing and how the conversation has evolved.
    • Unpacking the current state of on-device computing and what may happen in the near future.
    • The challenges of training AI models for the edge.
    • Exploring Siddhika’s role at Qualcomm and what she hopes to achieve.
    • Diving deeper into her process for achieving her goals.
    • Common industry challenges that developers are facing and her methods for solving them.

    Quotes:

    “Ultimately, we are constrained with the size of the device. It’s all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_

    “By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_

    Links Mentioned in Today’s Episode:

    Siddhika Nevrekar on LinkedIn

    Siddhika Nevrekar on X

    Qualcomm AI Hub

    How AI Happens

    Sama

    33 mins
  • Block Developer Advocate Rizel Scarlett
    Dec 3 2024

    Today we are joined by Rizel Scarlett, Developer Advocate at Block, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the burdens of responsibility on AI users, why it’s important to make AI tools accessible to all, and the ins and outs of G{Code} House, a learning community for Indigenous women and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry into tech for the G{Code} community, and she describes the ideal relationship between a developer advocate and the technical arm of a business.

    Key Points From This Episode:

    • Rizel Scarlett describes the role and responsibilities of a developer advocate.
    • Her role in getting others to understand how GitHub Copilot should be used.
    • Exploring her ongoing projects and current duties at Block.
    • How the conversation around AI copilot tools has shifted in the last 18 months.
    • The importance of objection handling and why companies must pay more attention to it.
    • AI hallucinations and Rizel’s advice for approaching this particular pain point.
    • Why “I don’t know” should be encouraged as a response from AI companions, not shunned.
    • Taking a closer look at how Block addresses AI hallucinations.
    • The burdens of responsibility of users of AI, and the need to democratize access to AI tools.
    • Unpacking G{Code} House and Rizel’s working relationship with this learning community.
    • Understanding what prevents Indigenous women and women of color from having careers in tech.
    • The ideal relationship between a developer advocate and the technical arm of a business.

    Quotes:

    “Every company is embedding AI into their product someway somehow, so it’s being more embraced.” — @blackgirlbytes [0:11:37]

    “I always respect someone that’s like, ‘I don’t know, but this is the closest I can get to it.’” — @blackgirlbytes [0:15:25]

    “With AI tools, when you’re more specific, the results are more refined.” — @blackgirlbytes [0:16:29]

    Links Mentioned in Today’s Episode:

    Rizel Scarlett

    Rizel Scarlett on LinkedIn

    Rizel Scarlett on Instagram

    Rizel Scarlett on X

    Block

    Goose

    GitHub

    GitHub Copilot

    G{Code} House

    How AI Happens

    Sama

    28 mins
  • dbt Labs Co-Founder Drew Banin
    Nov 21 2024

    Key Points From This Episode:

    • Drew and his co-founders’ background working together at RJ Metrics.
    • The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
    • Initial adoption of dbt Labs and why it was so well-received from the very beginning.
    • The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
    • Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
    • Unpacking examples where LLMs struggle with specific questions, like math problems.
    • The importance of thoughtful prompt engineering and application design with LLMs.
    • What is needed to maximize the utility of LLMs in enterprise settings.
    • How understanding the specific use case can help you get better results from LLMs.
    • What developers can do to constrain the search space and provide better output.
    • Why Drew believes prompt engineering will become less important for the average user.
    • The exciting potential of vector embeddings and the ongoing evolution of LLMs.

    Quotes:

    “Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]

    “One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]

    “I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]

    “My belief is that prompt engineering will – become less important – over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

    Links Mentioned in Today’s Episode:

    Understanding the Limitations of Mathematical Reasoning in Large Language Models

    Drew Banin on LinkedIn

    dbt Labs

    How AI Happens

    Sama

    28 mins
  • Saidot CEO Meeri Haataja
    Oct 31 2024

    In this episode, you’ll hear about Meeri's incredible career, insights from the recent AI Pact conference she attended, her company's involvement in it, and the reality of holding companies accountable to AI governance practices. We discuss how to know if you have an AI problem, what makes third-party generative AI more risky, and so much more! Meeri even shares how she thinks the EU AI Act will impact AI companies and what companies can do to take stock of their risk factors and ensure that they are building responsibly. You don’t want to miss this one, so be sure to tune in now!

    Key Points From This Episode:

    • Insights from the AI Pact conference.
    • The reality of holding AI companies accountable.
    • What inspired her to start Saidot to offer solutions for AI transparency and accountability.
    • How Meeri assesses companies and their organizational culture.
    • What makes generative AI more risky than other forms of machine learning.
    • Reasons that use-related risks are the most common sources of AI risks.
    • Meeri’s thoughts on the impact of the EU AI Act.

    Quotes:

    “It’s best to work with companies who know that they already have a problem.” — @meerihaataja [0:09:58]

    “Third-party risks are way bigger in the context of [generative AI].” — @meerihaataja [0:14:22]

    “Use and use-context-related risks are the major source of risks.” — @meerihaataja [0:17:56]

    “Risk is fine if it’s on an acceptable level. That’s what governance seeks to do.” — @meerihaataja [0:21:17]

    Links Mentioned in Today’s Episode:

    Saidot

    Meeri Haataja on LinkedIn

    Meeri Haataja on Instagram

    Meeri Haataja on X

    How AI Happens

    Sama

    25 mins
  • FICO Chief Analytics Officer Dr. Scott Zoldi
    Oct 18 2024

    In this episode, Dr. Zoldi offers insight into the transformative potential of blockchain for ensuring transparency in AI development, the critical need for explainability over mere predictive power, and how FICO maintains trust in its AI systems through rigorous model development standards. We also delve into the essential integration of data science and software engineering teams, emphasizing that collaboration from the outset is key to operationalizing AI effectively.

    Key Points From This Episode:

    • How Scott integrates his role as an inventor with his duties as FICO CAO.
    • Why he believes that mindshare is an essential leadership quality.
    • What sparked his interest in responsible AI as a physicist.
    • The shifting demographics of those who develop machine learning models.
    • Insight into the use of blockchain to advance responsible AI.
    • How FICO uses blockchain to ensure auditable ML decision-making.
    • Operationalizing AI and the typical mistakes companies make in the process.
    • The value of integrating data science and software engineering teams from the start.
    • A fear-free perspective on what Scott finds so uniquely exciting about AI.

    Quotes:

    “I have to stay ahead of where the industry is moving and plot out the directions for FICO in terms of where AI and machine learning is going – [Being an inventor is critical for] being effective as a chief analytics officer.” — @ScottZoldi [0:01:53]

    “[AI and machine learning] is software like any other type of software. It's just software that learns by itself and, therefore, we need [stricter] levels of control.” — @ScottZoldi [0:23:59]

    “Data scientists and AI scientists need to have partners in software engineering. That's probably the number one reason why [companies fail during the operationalization process].” — @ScottZoldi [0:29:02]

    Links Mentioned in Today’s Episode:

    FICO

    Dr. Scott Zoldi

    Dr. Scott Zoldi on LinkedIn

    Dr. Scott Zoldi on X

    FICO Falcon Fraud Manager

    How AI Happens

    Sama

    34 mins
  • Lemurian Labs CEO Jay Dawani
    Oct 10 2024

    Jay breaks down the critical role of software optimizations and how they drive performance gains in AI, highlighting the importance of reducing inefficiencies in hardware. He also discusses the long-term vision for Lemurian Labs and the broader future of AI, pointing to the potential breakthroughs that could redefine industries and accelerate innovation, plus a whole lot more.

    Key Points From This Episode:

    • Jay’s diverse professional background and his attraction to solving unsolvable problems.
    • How his unfinished business in robotics led him to his current work at Lemurian Labs.
    • What he has learned from being CEO and the biggest obstacles he has had to overcome.
    • Why he believes engineers with a problem-solving mindset can be effective CEOs.
    • Lemurian Labs: making AI computing more efficient, affordable, and environmentally friendly.
    • The critical role of software in increasing AI efficiency.
    • Some of the biggest challenges in programming GPUs.
    • Why better software is needed to optimize the use of hardware.
    • Common inefficiencies in AI development and how to solve them.
    • Reflections on the future of Lemurian Labs and AI more broadly.

    Quotes:

    “Every single problem I've tried to pick up has been one that – most people have considered as being almost impossible. There’s something appealing about that.” — Jay Dawani [0:02:58]

    “No matter how good of an idea you put out into the world, most people don't have the motivation to go and solve it. You have to have an insane amount of belief and optimism that this problem is solvable, regardless of how much time it's going to take.” — Jay Dawani [0:07:14]

    “If the world's just betting on one company, then the amount of compute you can have available is pretty limited. But if there's a lot of different kinds of compute that are slightly optimized with different resources, making them accessible allows us to get there faster.” — Jay Dawani [0:19:36]

    “Basically what we're trying to do [at Lemurian Labs] is make it easy for programmers to get [the best] performance out of any hardware.” — Jay Dawani [0:20:57]

    Links Mentioned in Today’s Episode:

    Jay Dawani on LinkedIn

    Lemurian Labs

    How AI Happens

    Sama

    29 mins
  • Intel VP & GM of Strategy & Execution Melissa Evers
    Sep 30 2024

    Melissa explains the importance of giving developers the choice of working with open source or proprietary options, experimenting with flexible application models, and choosing the size of your model according to the use case you have in mind. Discussing the democratization of technology, we explore common challenges in the context of AI, including the potential of generative AI versus the challenge of its implementation, where true innovation lies, and what Melissa is most excited about seeing in the future.

    Key Points From This Episode:

    • An introduction to Melissa Evers, Vice President and General Manager of Strategy and Execution at Intel Corporation.
    • More on the communities she has played a leadership role in.
    • Why open source governance is not an oxymoron and why it is critical.
    • The hard work that goes on behind the scenes in open source.
    • What to strive for when building a healthy open source community.
    • Intel’s perspective on the importance of open source and open AI.
    • Enabling developer choices about open source or proprietary options.
    • Growing awareness of building architecture around freedom of choice.
    • Identifying when a model is a poor choice or lacks accuracy.
    • Thinking critically about future-proofing yourself with regard to model choice.
    • Opportunities for large and smaller models.
    • Finding the perfect intersection between value delivery, value creation, and cost.
    • Common challenges in the context of AI, including the potential of generative AI and its implementation.
    • Why there is such a commonality of use cases in the realm of generative AI.
    • Where true innovation and value lie, even though there may be commonality in use cases.
    • Examples of creative uses of generative AI: retail, compound AI systems, manufacturing, and more.
    • Understanding that innovation in this area is still in its early development stages.
    • How Wardley Mapping can support an understanding of scale.
    • What she is most excited about for the future of AI: Rapid learning in healthcare.

    Quotes:

    “One of the things that is true about software in general is that the role that open source plays within the ecosystem has dramatically shifted and accelerated technology development at large.” — @melisevers [0:03:02]

    “It’s important for all citizens of the open source community, corporate or not, to understand and own their responsibilities with regard to the hard work of driving the technology forward.” — @melisevers [0:05:18]

    “We believe that innovation is best served when folks have the tools at their disposal on which to innovate.” — @melisevers [0:09:38]

    “I think the focus for open source broadly should be on the elements that are going to be commodified.” — @melisevers [0:25:04]

    Links Mentioned in Today’s Episode:

    Melissa Evers on LinkedIn

    Melissa Evers on X

    Intel Corporation

    35 mins
  • Synopsys VP of AI Thomas Andersen
    Sep 27 2024

    VP of AI and ML at Synopsys, Thomas Andersen joins us to discuss designing AI chips. Tuning in, you’ll hear all about our guest’s illustrious career, how he became interested in technology, the tech scene in East Germany, what it was like growing up there, and so much more! We delve into his company, Synopsys, and the chips they build before discussing his role in building AI algorithms.

    Key Points From This Episode:

    • A warm welcome to today’s guest, Thomas Andersen.
    • How he got into the tech world and his experience growing up in East Germany.
    • The cost of AI compute coming down at the same time that demand is going up.
    • Thomas tells us about Synopsys and what goes into building their chips.
    • Other traditional software companies that are now designing their own AI chips.
    • What Thomas’ role looks like in machine learning and building AI algorithms.
    • How the constantly changing rules of AI chip design continue to create new obstacles.
    • Thomas tells us how they use reinforcement learning in their processes.
    • The different applications for generative AI and why it needs good input data.
    • Thomas’ advice for anyone wanting to get into the world of AI.

    Quotes:

    “It’s not really the technology that makes life great, it’s how you use it, and what you make of it.” — Thomas Andersen [0:07:31]

    “There is, of course, a lot of opportunities to use AI in chip design.” — Thomas Andersen [0:25:39]

    “Be bold, try as many new things [as you can, and] make sure you use the right approach for the right tasks.” — Thomas Andersen [0:40:09]

    Links Mentioned in Today’s Episode:

    Thomas Andersen on LinkedIn

    Synopsys

    How AI Happens

    Sama

    42 mins