How AI Happens

By: Sama
Podcast
  • Summary

  • How AI Happens is a podcast featuring experts and practitioners explaining their work at the cutting edge of Artificial Intelligence. Tune in to hear AI Researchers, Data Scientists, ML Engineers, and the leaders of today’s most exciting AI companies explain the newest and most challenging facets of their field. Powered by Sama.
    2021 Sama, Inc
Episodes
  • Qualcomm Senior Director Siddhika Nevrekar
    Dec 16 2024

    Today we are joined by Siddhika Nevrekar, an experienced product leader passionate about solving complex problems in ML by bringing people and products together in an environment of trust. We unpack the state of free computing, the challenges of training AI models for the edge, what Siddhika hopes to achieve in her role at Qualcomm, and her methods for solving common industry problems that developers face.

    Key Points From This Episode:

    • Siddhika Nevrekar walks us through her career pivot from cloud to edge computing.
    • Why she’s passionate about overcoming her fears and achieving the impossible.
    • Increasing compute on edge devices versus developing more efficient AI models.
    • Siddhika explains what makes Apple a truly unique company.
    • The original inspirations for edge computing and how the conversation has evolved.
    • Unpacking the current state of free computing and what may happen in the near future.
    • The challenges of training AI models for the edge.
    • Exploring Siddhika’s role at Qualcomm and what she hopes to achieve.
    • Diving deeper into her process for achieving her goals.
    • Common industry challenges that developers are facing and her methods for solving them.

    Quotes:

    “Ultimately, we are constrained with the size of the device. It’s all physics. How much can you compress a small little chip to do what hundreds and thousands of chips can do which you can stack up in a cloud? Can you actually replicate that experience on the device?” — @siddhika_

    “By the time I left Apple, we had 1000-plus [AI] models running on devices and 10,000 applications that were powered by AI on the device, exclusively on the device. Which means the model is entirely on the device and is not going into the cloud. To me, that was the realization that now the moment has arrived where something magical is going to start happening with AI and ML.” — @siddhika_

    Links Mentioned in Today’s Episode:

    Siddhika Nevrekar on LinkedIn

    Siddhika Nevrekar on X

    Qualcomm AI Hub

    How AI Happens

    Sama

    33 mins
  • Block Developer Advocate Rizel Scarlett
    Dec 3 2024

    Today we are joined by Developer Advocate at Block, Rizel Scarlett, who is here to explain how to bridge the gap between the technical and non-technical aspects of a business. We also learn about AI hallucinations and how Rizel and Block approach this particular pain point, the burden of responsibility on AI users, why it’s important to make AI tools accessible to all, and the ins and outs of G{Code} House, a learning community for Indigenous women and women of color in tech. To end, Rizel explains what needs to be done to break down barriers to entry for the G{Code} population in tech, and she describes the ideal relationship between a developer advocate and the technical arm of a business.

    Key Points From This Episode:

    • Rizel Scarlett describes the role and responsibilities of a developer advocate.
    • Her role in getting others to understand how GitHub Copilot should be used.
    • Exploring her ongoing projects and current duties at Block.
    • How the conversation around AI copilot tools has shifted in the last 18 months.
    • The importance of objection handling and why companies must pay more attention to it.
    • AI hallucinations and Rizel’s advice for approaching this particular pain point.
    • Why “I don’t know” should be encouraged as a response from AI companions, not shunned.
    • Taking a closer look at how Block addresses AI hallucinations.
    • The burden of responsibility on AI users, and the need to democratize access to AI tools.
    • Unpacking G{Code} House and Rizel’s working relationship with this learning community.
    • Understanding what prevents Indigenous women and women of color from pursuing careers in tech.
    • The ideal relationship between a developer advocate and the technical arm of a business.

    Quotes:

    “Every company is embedding AI into their product some way, somehow, so it’s being more embraced.” — @blackgirlbytes [0:11:37]

    “I always respect someone that’s like, ‘I don’t know, but this is the closest I can get to it.’” — @blackgirlbytes [0:15:25]

    “With AI tools, when you’re more specific, the results are more refined.” — @blackgirlbytes [0:16:29]

    Links Mentioned in Today’s Episode:

    Rizel Scarlett

    Rizel Scarlett on LinkedIn

    Rizel Scarlett on Instagram

    Rizel Scarlett on X

    Block

    Goose

    GitHub

    GitHub Copilot

    G{Code} House

    How AI Happens

    Sama

    28 mins
  • dbt Labs Co-Founder Drew Banin
    Nov 21 2024

    Today we are joined by dbt Labs Co-Founder Drew Banin, who walks us through how he and his co-founders went from working together at RJ Metrics to building dbt Labs, how the semantic layer intersects with LLMs, and why thoughtful prompt engineering and application design are key to getting useful results from LLMs in enterprise settings.

    Key Points From This Episode:

    • Drew and his co-founders’ background working together at RJ Metrics.
    • The lack of existing data solutions for Amazon Redshift and how they started dbt Labs.
    • Initial adoption of dbt Labs and why it was so well-received from the very beginning.
    • The concept of a semantic layer and how dbt Labs uses it in conjunction with LLMs.
    • Drew’s insights on a recent paper by Apple on the limitations of LLMs’ reasoning.
    • Unpacking examples where LLMs struggle with specific questions, like math problems.
    • The importance of thoughtful prompt engineering and application design with LLMs.
    • What is needed to maximize the utility of LLMs in enterprise settings.
    • How understanding the specific use case can help you get better results from LLMs.
    • What developers can do to constrain the search space and provide better output.
    • Why Drew believes prompt engineering will become less important for the average user.
    • The exciting potential of vector embeddings and the ongoing evolution of LLMs.

    Quotes:

    “Our observation was [that] there needs to be some sort of way to prepare and curate data sets inside of a cloud data warehouse. And there was nothing out there that could do that on [Amazon] Redshift, so we set out to build it.” — Drew Banin [0:02:18]

    “One of the things we're thinking a ton about today is how AI and the semantic layer intersect.” — Drew Banin [0:08:49]

    “I don't fundamentally think that LLMs are reasoning in the way that human beings reason.” — Drew Banin [0:15:36]

    “My belief is that prompt engineering will become less important over time for most use cases. I just think that there are enough people that are not well versed in this skill that the people building LLMs will work really hard to solve that problem.” — Drew Banin [0:23:06]

    Links Mentioned in Today’s Episode:

    Understanding the Limitations of Mathematical Reasoning in Large Language Models

    Drew Banin on LinkedIn

    dbt Labs

    How AI Happens

    Sama

    28 mins
