GPT-2 1.5B


This week we discuss the release of the full 1.5-billion-parameter version of the GPT-2 neural network. Released by OpenAI, GPT-2 is a language model trained to predict the next word in 40GB of Internet text. Scaled-back versions have been available for some time, but the full model was previously considered "too dangerous" to release to the Internet. Why the change? Listen in as we discuss the implications of "big AI", where the resources needed to train very large models are beyond what most individuals, or even universities, can afford. And what about the energy cost of all those GPUs/TPUs running gradient descent for weeks or months on end? Should there be a "training shame" (utbildningskam in Swedish) for big, energy-hungry models?
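For listeners curious what "predicting the next word" looks like in practice, here is a minimal sketch using the Hugging Face transformers library and its gpt2-xl checkpoint (the 1.5B-parameter GPT-2). The library, checkpoint name, and prompt are illustrative assumptions, not something covered in the episode.

```python
# Minimal sketch: ask the 1.5B GPT-2 for its single most likely next token.
# Assumes the Hugging Face "transformers" library and the "gpt2-xl" checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
model.eval()

prompt = "OpenAI has released the full 1.5 billion parameter"  # hypothetical prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# The highest-scoring token at the last position is the model's next-word guess.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))
```

Note that this only runs inference on already-trained weights; the energy question in the episode is about the far larger cost of training those weights in the first place.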

Creative AI Podcast Episode 16: GPT-2 1.5B

Listen and subscribe to our podcast here!

Read more about GPT-2 from OpenAI here.

Listen on Spotify

Listen on iTunes

Discuss this Article on Twitter

