the Lightcone podcast</a>:</p>
<figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/fmI_OciHV_8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Building AI Models Faster And Cheaper Than You Think"></iframe></figure>
<p>We hope this list inspires more founders to realize that they have the ability to build their own models and advance the field of artificial intelligence in new directions.</p>
<p><strong><a href="http://atmo.ai/">Atmo</a>:</strong> AI-powered meteorology for countries, militaries, and enterprises, promising weather predictions that are considerably more accurate yet cheaper to produce than the existing state of the art.</p>
<p><strong><a href="https://canofsoup.com/">Can of Soup</a>:</strong> An app where you can use AI to create photos of you and your friends in imaginary situations. They built and launched the first model that can do this during the YC batch.</p>
<p><strong><a href="https://deepgram.com/">Deepgram</a>:</strong> APIs for ultra-fast speech-to-text transcription and natural-sounding text-to-speech.</p>
<p><strong><a href="https://www.diffuse.bio/">Diffuse Bio</a>:</strong> Building foundation models in biology that design new proteins for vaccines and therapeutics.</p>
<p><strong><a href="https://draftaid.io/">Draftaid</a>:</strong> AI to help engineers and designers create CAD drawings, turning 3D models into the highly detailed fabrication drawings that manufacturers expect.</p>
<p><strong><a href="https://www.edgetrace.ai/">Edgetrace</a>:</strong> Takes a huge video dataset and allows you to search through it in plain English. Example: digging through hours of traffic footage to find when a specific car appears with just its description (e.g. “red Prius with a golden wheel cap turning right”). One of the founders worked on AI at Cruise; the other built drones for mapping construction sites.</p>
<p><strong><a href="https://www.ezdubs.ai/">EzDubs</a>:</strong> Dubs videos into different languages in real time while preserving the speaker’s voice.</p>
<p><strong><a href="https://exa.ai/">Exa</a>:</strong> A search engine/API for AI and AI developers. It searches by meaning rather than keywords, allowing developers to run queries like “a short article about the early days of Google” or “news about the latest advancements in AI” and integrate the results into the answers their products give.</p>
<p><strong><a href="https://www.guidelabs.ai/">Guide Labs</a>:</strong> Foundation models are typically black boxes that cannot describe how they arrive at their answers. Guide Labs solves this with interpretable foundation models that can explain the reasoning behind their output and clarify which parts of the training data and the prompt influenced it. The team previously worked at Google Brain and Meta Research and were key developers of <a href="https://github.com/pytorch/captum/">Captum</a>.</p>

<p><strong>Infinity AI:</strong> Working on a “script-to-movie” model: you tell it what the on-screen characters say and do, and it’ll generate a video accordingly. Their first product creates “talking-head” style clips from a provided script.</p>
<p><strong><a href="https://kscale.dev/">K-Scale</a>:</strong> Building the infrastructure for enabling robotics foundation models and ultimately solving the problem of real-world embodied intelligence.</p>
<p><strong><a href="https://www.linum.ai/">Linum</a>:</strong> Building models and tools that allow you to make animated videos from prompts.</p>
<p><strong><a href="https://www.metalware.io/">Metalware</a>:</strong> AI tools to help firmware engineers build faster, like a specialized copilot for low-level programming or a PDF reader that can crunch through a pile of data sheets and answer questions far faster than manual searching. The co-founders helped build the firmware for Starlink’s antennas.</p>
<p><strong><a href="https://www.navier.ai/">Navier AI</a>:</strong> A physics-ML solver that can simulate computational fluid dynamics in real time, an essential need for aerospace and automotive engineering.</p>
<p><strong><a href="https://osium.ai/">Osium AI</a>:</strong> Helps R&amp;D engineers design new materials faster, using AI to predict the physical properties of a material and to speed up otherwise arduous microscopic image analysis.</p>
<p><strong><a href="https://www.phind.com/">Phind</a>:</strong> A conversational search engine built for developers, with a VS Code extension to tie it into your existing codebase. Ask it a question and it can generate an answer using your code as context. Stuck on an error or warning? It can offer up code to fix it.</p>
<p><strong><a href="https://piramidal.ai/">Piramidal</a>:</strong> A foundation model for understanding brain activity, trained on a “colossal and diverse corpus” of brainwave data. Their first product is a copilot for neurologists evaluating potential epilepsy diagnoses. They’ve been able to train a large model at lower computational cost by dividing sequential EEG data into chunks, which reduces the memory footprint.</p>
<p><strong><a href="https://playground.com/">Playground</a>:</strong> A powerful AI-based image editor. Create new images from prompts, merge real and synthetic images into new pieces, or modify existing images with just a few words (like “make it winter” or “give the boy a cape”).</p>
<p><strong><a href="https://play.ht/">PlayHT</a>:</strong> Highly expressive, AI-generated voices for media and content creators. It can be trained on a new voice with about 10 minutes of sample recordings. You can <a href="https://www.youtube.com/watch?v=aL_hmxTLHiM">hear some samples here</a>.</p>
<p><strong><a href="https://sevn.ai/">SevnAI</a>:</strong> Building foundation models for graphic design. Current diffusion models output images that are hard to edit; SevnAI instead generates SVGs that users can easily edit, using a model with a custom architecture for spatial reasoning.</p>
<p><strong><a href="https://sonauto.ai/">Sonauto</a>:</strong> AI music creation. Give it lyrics, describe your song (e.g. “pop track that features vibrant synthesizers and an upbeat tempo”), and hit “Generate” — out pops a brand new tune.
<a href="https://twitter.com/snowmaker/status/1770686247146500177">Here’s a power metal track</a> about YC that Jared Friedman generated with Sonauto.</p>
<p><strong><a href="https://synclabs.so/">Sync Labs</a>:</strong> They’ve built a model that lets you re-sync the lips of someone in a video to match up with new audio — allowing you, for example, to change the spoken language of a video in a way that looks natural. They’re working towards doing this in real time for uses like live lip-synced translation in video calls.</p>
<p><strong><a href="https://www.tavus.io/">Tavus</a>:</strong> Record one video and have it automatically personalized for each and every one of your viewers — swapping in the viewer’s name, company, etc. where appropriate. The company recently released a <a href="https://twitter.com/heytavus/status/1767536432682594349">public beta of a tool</a> that lets you create a “human-like replica” of yourself with 2 minutes of footage.</p>
<p><strong><a href="https://www.yonedalabs.com/">Yoneda Labs</a>:</strong> Helps chemists figure out the best temperature, concentration, and catalysts to optimize their chemical reactions.</p>
<p><strong><a href="https://www.yonduai.com/">Yondu</a>:</strong> Building foundation models for robots to autonomously navigate the world.</p>

<h1>Learning Math for Machine Learning</h1>
<p><em>Vincent Chen is a student at Stanford University studying Computer Science. He is also a Research Assistant at the Stanford AI Lab.</em></p>
<hr />
<p>It’s not entirely clear what level of mathematics is necessary to get started in machine learning, especially for those who didn’t study math or statistics in school.</p>
<p>In this piece, my goal is to suggest the mathematical background necessary to build products or conduct academic research in machine learning.
These suggestions are derived from conversations with machine learning engineers, researchers, and educators, as well as my own experiences in both machine learning research and industry roles.</p>
<p>To frame the math prerequisites, I first propose different mindsets and strategies for approaching your math education outside of traditional classroom settings. Then, I outline the specific backgrounds necessary for different kinds of machine learning work, as these subjects range from high school-level statistics and calculus to the latest developments in probabilistic graphical models (PGMs). By the end of the post, my hope is that you’ll have a sense of the math education you’ll need to be effective in your machine learning work, whatever that may be!</p>
<p>To preface the piece, I acknowledge that learning styles, frameworks, and resources are unique to a learner’s personal needs and goals — your opinions would be appreciated in <a href="https://news.ycombinator.com/item?id=17664084">the discussion on HN</a>!</p>
<p><strong>A Note on Math Anxiety</strong><br />
It turns out that a lot of people — including engineers — are scared of math. To begin, I want to address the myth of “being good at math.”</p>
<p>The truth is, people who are good at math have lots of practice doing math. As a result, they’re comfortable being stuck while doing math. A student’s mindset, as opposed to innate ability, is the primary predictor of one’s ability to learn math (as shown by <a href="https://www.theatlantic.com/education/archive/2013/10/the-myth-of-im-bad-at-math/280914/">recent studies</a>).</p>
<p>To be clear, it will take time and effort to achieve this state of comfort, but it’s certainly not something you’re born with. The rest of this post will help you figure out what level of mathematical foundation you need and outline strategies for building it.</p>
<h2>Getting Started</h2>
<p>As soft prerequisites, we assume basic comfort with <a href="http://cs229.stanford.edu/section/cs229-linalg.pdf">linear algebra/matrix calculus</a> (so you don’t get stuck on notation) and introductory <a href="http://cs229.stanford.edu/section/cs229-prob.pdf">probability</a>. We also encourage basic programming competency, which we support as a tool to learn math in context. Afterwards, you can fine-tune your focus based on the kind of work you’re excited about.</p>
<p><strong>How to Learn Math Outside of School</strong><br />
I believe the best way to learn math is as a full-time job (i.e. as a student). Outside of that environment, it’s likely that you won’t have the structure, (positive) peer pressure, and resources available in the academic classroom.</p>
<p>To learn math outside of school, I’d recommend study groups or lunch-and-learn seminars as great resources for committed study. In research labs, this might come in the form of a reading group. Structure-wise, your group might walk through textbook chapters and discuss lectures on a regular basis while dedicating a Slack channel to asynchronous Q&amp;A.</p>
<p>Culture plays a large role here — this kind of “additional” study should be encouraged and incentivized by management so that it doesn’t feel like it takes away from day-to-day deliverables.
In fact, investing in peer-driven learning environments can make your long-term work more effective, despite the short-term cost in time.</p>
<p><strong>Math and Code</strong><br />
Math and code are highly intertwined in machine learning workflows. Code is often built directly from mathematical intuition, and it even shares the syntax of mathematical notation. In fact, modern data science frameworks (e.g. <a href="http://www.numpy.org/">NumPy</a>) make it intuitive and efficient to translate mathematical operations (e.g. matrix/vector products) into readable code.</p>
<p>I encourage you to embrace code as a way to solidify your learning. Both math and code depend on precision in understanding and notation. For instance, practicing the manual implementation of loss functions or optimization algorithms can be a great way to truly understand the underlying concepts.</p>
<p>As an example of learning math through code, let’s consider implementing backpropagation for the ReLU activation in your neural network (<a href="https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b/">yes, even if TensorFlow/PyTorch can do this for you!</a>). As a brief primer, backpropagation is a technique that relies on the chain rule from calculus to efficiently compute gradients. To utilize the chain rule in this setting, we multiply upstream derivatives by the gradient of ReLU.</p>
<p>To begin, consider the <a href="https://en.wikipedia.org/wiki/Rectifier_(neural_networks)">ReLU activation</a>, defined as ReLU(x) = max(0, x); its derivative is 1 where x is positive and 0 elsewhere.</p>
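<p>To make that concrete, here is a minimal NumPy sketch of the forward and backward pass for ReLU. The function names and the toy values are illustrative assumptions, not anything prescribed by a particular framework:</p>
<pre><code class="language-python">import numpy as np

def relu_forward(x):
    # ReLU(x) = max(0, x), applied elementwise.
    return np.maximum(0, x)

def relu_backward(upstream_grad, x):
    # Chain rule: multiply the upstream derivative by ReLU's local gradient,
    # which is 1 where x > 0 and 0 elsewhere.
    return upstream_grad * (x > 0)

# Toy example with made-up values, just to show the behavior.
x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
upstream = np.ones_like(x)          # pretend dL/d(ReLU output) is 1 everywhere
print(relu_forward(x))              # [0.  0.  0.  1.5 3. ]
print(relu_backward(upstream, x))   # [0. 0. 0. 1. 1.]
</code></pre>
<p>The backward pass is the chain rule verbatim: the upstream derivative is multiplied elementwise by ReLU’s local gradient.</p>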


<h2>Math for Building Machine Learning Products</h2>
<p>To inform this section, I spoke to machine learning engineers to figure out where math was most helpful in debugging their systems. The following are examples of questions that engineers found themselves answering with mathematical insights. If you haven’t seen them before, no worries — the hope is that this section will provide some context for the specific kinds of questions you might find yourself answering!</p>
<ul>
<li>What clustering method should I use to visualize my high-dimensional customer data?<br />
○ Approach: <a href="https://stats.stackexchange.com/questions/238538/are-there-cases-where-pca-is-more-suitable-than-t-sne">PCA vs. t-SNE</a> (see the sketch after this list)</li>
<li>How should I calibrate a threshold (e.g. confidence level 0.9 vs. 0.8) for “blocking” fraudulent user transactions?<br />
○ Approach: <a href="http://scikit-learn.org/stable/modules/calibration.html">Probability calibration</a></li>
<li>What’s the best way to characterize the bias of my satellite data toward specific regions of the world (Silicon Valley vs. Alaska)?<br />
○ Approach: Open research question. Perhaps aim for demographic <a href="http://blog.mrtz.org/2016/09/06/approaching-fairness.html">parity</a>?</li>
</ul>
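<p>As a rough sketch of the first question above (with a random placeholder matrix standing in for real customer data, and arbitrary parameters), comparing a PCA projection with a t-SNE embedding in scikit-learn looks something like this:</p>
<pre><code class="language-python">import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Hypothetical "customer" matrix: 500 samples, 50 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))

# PCA: a fast linear projection that preserves global variance structure.
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE: a nonlinear embedding that tends to reveal local cluster structure,
# but is slower and sensitive to hyperparameters such as perplexity.
X_tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)

print(X_pca.shape, X_tsne.shape)  # (500, 2) (500, 2)
</code></pre>
<p>Which projection is “right” depends on what you want to see: PCA is cheap and easy to interpret, while t-SNE often separates clusters more clearly at the cost of distorting global distances.</p>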

<p>Generally, statistics and linear algebra can be employed in some way for each of these questions. However, arriving at satisfactory answers often requires a domain-specific approach. If that’s the case, how do you narrow down the kind of math you need to learn?</p>
<p><strong>Define Your System</strong><br />
There is no shortage of resources (e.g. <a href="http://scikit-learn.org/stable/">scikit-learn</a> for data analysis, <a href="https://keras.io/">Keras</a> for deep learning) that will help you jump into writing code to model your systems. In doing so, try to answer the following questions about the pipeline you need to build:</p>
<ol>
<li>What are the inputs/outputs of your system?</li>
<li>How should you prepare your data to fit your system?</li>
<li>How can you construct features or curate data to help your model generalize?</li>
<li>How do you define a reasonable objective for your problem?</li>
</ol>
<p>You’d be surprised — defining your system can be hard! Afterwards, the engineering required for pipeline-building is also non-trivial. In other words, building machine learning products requires a significant amount of heavy lifting that doesn’t require a deep mathematical background.</p>
<p><strong>Resources</strong><br />
• <a href="https://developers.google.com/machine-learning/guides/rules-of-ml/">Best Practices for ML Engineering</a> by Martin Zinkevich, Research Scientist at Google</p>
<p><strong>Learning Math as You Need It</strong><br />
Diving headfirst into a machine learning workflow, you might find that there are some steps you get stuck at, especially while debugging. When you’re stuck, do you know what to look up? How reasonable are your weights? Why isn’t your model converging with a particular loss definition? What’s the right way to measure success? At this point, it may be helpful to make assumptions about the data, constrain your optimization differently, or try different algorithms.</p>
<p>Often, you’ll find that there’s mathematical intuition baked into the modeling/debugging process (e.g. selecting loss functions or evaluation metrics) that could be instrumental to making informed engineering decisions. These are your opportunities to learn!</p>
<p>Rachel Thomas from <a href="http://www.fast.ai/">fast.ai</a> is a proponent of this “on-demand” method — while educating students, she found that it was more important for her deep learning students to get far enough to become excited about the material. Afterwards, their math education involved filling in the holes, on demand.</p>
<p><strong>Resources</strong><br />
• Course: <a href="http://www.fast.ai/2017/07/17/num-lin-alg/">Computational Linear Algebra</a> by fast.ai<br />
• YouTube: <a href="https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw">3blue1brown</a>: Essence of <a href="https://www.youtube.com/watch?v=kjBOesZCoqc&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab">Linear Algebra</a> and <a href="https://www.youtube.com/watch?v=WUvTyaaNkzM&amp;list=PLZHQObOWTQDMsr9K-rj53DwVRMYO3t5Yr">Calculus</a><br />
• Textbook:
Linear Algebra Done Right by Axler<br />
• Textbook: <a href="https://web.stanford.edu/~hastie/ElemStatLearn/">The Elements of Statistical Learning</a> by Hastie, Tibshirani, and Friedman<br />
• Course: <a href="http://cs229.stanford.edu/syllabus.html#opt">Stanford’s CS229 (Machine Learning) Course Notes</a></p>
<h2>Math for Machine Learning Research</h2>
<p>I now want to characterize the type of mathematical mindset that is useful for research-oriented work in machine learning. The cynical view of machine learning research points to plug-and-play systems where more compute is thrown at models to squeeze out higher performance. In some circles, <a href="https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf">researchers remain skeptical</a> that empirical methods lacking mathematical rigor (e.g. certain deep learning methods) can carry us to the holy grail of human-level intelligence.</p>
<p>It’s concerning that the research world might be building on existing systems and assumptions that don’t extend our fundamental understanding of the field. Researchers need to contribute primitives — new, foundational building blocks that can be used to derive entirely new insights and approaches to goals in the field. For instance, this might mean rethinking building blocks like convolutional neural networks for image classification, as Geoff Hinton, “the <a href="https://en.wikipedia.org/wiki/Geoffrey_Hinton">Godfather of Deep Learning</a>,” does in his recent Capsule Networks <a href="https://arxiv.org/pdf/1710.09829v1.pdf">paper</a>.</p>


<p>To make the next leaps in machine learning, we need to ask fundamental questions. This requires a deep mathematical maturity, which Michael Nielsen, author of <em>Neural Networks and Deep Learning</em>, described to me as “playful exploration.” This process involves thousands of hours of being “stuck,” asking questions, and flipping problems over in pursuit of new perspectives. “Playful exploration” allows scientists to ask deep, insightful questions, beyond the combination of straightforward ideas and architectures.</p>
<p>To state the obvious — in ML research, it is still impossible to learn <em>everything</em>! Properly engaging in “playful exploration” requires that you follow your interest, rather than worrying about the hottest new result.</p>
<p>ML research is an incredibly rich field of study with pressing problems in fairness, interpretability, and accessibility. As is true across all scientific disciplines, fundamental thinking is not an on-demand process — it takes patience to be able to think with the breadth of high-level mathematical frameworks required for critical problem solving.</p>
<p><strong>Resources</strong><br />
• Blog: <a href="https://www.maa.org/external_archive/devlin/devlin_10_00.html">Do SWEs need mathematics?</a> by Keith Devlin<br />
• Reddit Thread: <a href="https://www.reddit.com/r/MachineLearning/comments/73n9pm/d_confession_as_an_ai_researcher_seeking_advice/">Confessions of an AI Researcher</a><br />
• Blog: <a href="http://www.people.vcu.edu/~dcranston/490/handouts/math-read.html">How to Read Mathematics</a> by Shai Simonson and Fernando Gouvea<br />
• Papers: <a href="https://papers.nips.cc/book/advances-in-neural-information-processing-systems-30-2017">NIPS</a> and <a href="http://proceedings.mlr.press/v70/">ICML</a> recent conference papers<br />
• Essay: <a href="https://www.maa.org/external_archive/devlin/LockhartsLament.pdf">A Mathematician’s Lament</a> by Paul Lockhart<sup id="footnoteid1"><a href="#footnote1">1</a></sup></p>
<p><strong>Democratizing Machine Learning Research</strong><br />
I hope that I haven’t painted “research math” as too esoteric, because the ideas formulated using math should be presented in intuitive forms! Sadly, many machine learning papers are still <a href="https://arxiv.org/abs/1807.03341">rife with complex and inconsistent terminology</a>, leaving key intuition difficult to discern. As a student, you can do yourself and the field a great service by attempting to translate dense papers into consumable chunks of intuition, via blog posts, tweets, etc. You might look to <a href="http://distill.pub/">distill.pub</a> as an example of a publication focused on offering clear explanations of machine learning research. In other words, take the demystification of technical ideas as a means towards “playful exploration” — your learning (and machine learning Twitter) will thank you for it!</p>
<h2>Takeaways</h2>
<p>In conclusion, I hope that I’ve provided a starting point for you to think about your math education for machine learning.</p>
<ul>
<li>Different problems require different levels of intuition, and I would encourage you to figure out what your objectives are in the first place.
</li>
<li>If you’re hoping to build products, seek out peers and study groups to work through problems together, and motivate your learning by diving into the end goal.</li>
<li>In the research world, broad mathematical foundations can give you the tools to push the field forward by contributing new, fundamental building blocks.</li>
<li>In general, math (especially in research-paper form) can be intimidating, but getting stuck is a huge part of the learning process.</li>
</ul>
<p>Good luck!</p>
<p><strong>Notes</strong><br />
<b id="footnote1">1.</b> A rather pointed criticism of math education that details “playful exploration.” But I suspect that Lockhart would disagree with the thesis of this post — that math should be used for anything <em>but</em> fun! <a href="#footnoteid1">↩</a></p>

<hr />
<p>Miles Brundage is an AI Policy Research Fellow with the Strategic AI Research Center at the <a href="https://www.fhi.ox.ac.uk/">Future of Humanity Institute</a>. He is also a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University.</p>
<p>Miles recently co-authored <a href="https://arxiv.org/abs/1802.07228">The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation</a>.</p>
<p><a href="https://twitter.com/timhwang">Tim Hwang</a> is the Director of the <a href="https://techcrunch.com/2017/01/10/omidyar-hoffman-create-27m-research-fund-for-ai-in-the-public-interest/">Harvard-MIT Ethics and Governance of AI Initiative</a>. He is also a Visiting Associate at the <a href="https://www.oii.ox.ac.uk/">Oxford Internet Institute</a> and a Fellow at the <a href="https://pacscenter.stanford.edu/research/project-on-democracy-and-the-internet/">Knight-Stanford Project on Democracy and the Internet</a>. This is Tim’s second time on the podcast; <a href="https://blog.ycombinator.com/at-the-intersection-of-ai-governments-and-google-tim-hwang/">he was also on episode 11</a>.</p>



<h2>All Posts</h2>
<ul>
<li><strong>How Adversarial Attacks Work</strong> by Emil Mikhailov (11/2/2017). Emil Mikhailov is the founder of XIX.ai (YC W17). Roman Trusov is a researcher at XIX.ai.</li>
<li><strong>Baidu's AI Lab Director on Advancing Speech Recognition and Simulation</strong> by Y Combinator (8/11/2017). Adam Coates is the Director of Baidu’s Silicon Valley AI Lab.</li>
<li><strong>Jeff Dean’s Lecture for YC AI</strong> by Y Combinator (8/7/2017). Jeff Dean is a Google Senior Fellow in the Research Group, where he leads the Google Brain project.</li>
<li><strong>Ex Machina's Scientific Advisor - Murray Shanahan</strong> by Y Combinator (6/28/2017). Murray Shanahan was one of the scientific advisors on Ex Machina.</li>
<li><strong>YC AI</strong> by Daniel Gross (3/19/2017). Some think the excitement around Artificial Intelligence is overhyped. They might be right. But if they’re wrong, we’re on the precipice of something really big. We can’t afford to ignore what might be the biggest technological leap since the Internet.</li>
</ul>