Speaker 1: Good morning. Welcome to our first-ever OpenAI DevDay.
Speaker 2: I just got back from the OpenAI Developer Day; they call it DevDay. This is the first conference they’ve held to try to attract developers, the programmers who will come and turn all that GPT AI goodness into services that you can use in the real world.
Speaker 1: We know that people want AI that is smarter, more personal, more customizable, can do more [00:00:30] on your behalf. Eventually, you’ll just ask a computer for what you need and it’ll do all of these tasks for you.
Speaker 2: I thought there were three pretty interesting announcements. The first was they announced GPTs, as they call ’em. These are special-purpose chatbots that you can create through a little build interface that they released today. So ChatGPT, most of us know it: it’s this general-purpose AI chatbot. These GPTs are specific-purpose chatbots that you create. So [00:01:00] you tell it what you want to do, you upload some of your own data, and then you give it a purpose.
Speaker 1: So today we’re taking our first small step that moves us towards this future. We’re thrilled to introduce GPTs. GPTs are tailored versions of ChatGPT for a specific purpose. To start, GPT Builder asks me what I want to make, and I’m going to say: I want to help startup founders think through their business [00:01:30] ideas and get advice; after the founder has gotten some advice, grill them on why they are not growing faster. Alright, so to start off, I just tell the GPT Builder a little bit about what I want here, and it’s going to go off and start thinking about that, and it’s going to write some detailed instructions for the GPT. And you can see here [00:02:00] on the right, in the preview mode, that it’s already starting to fill out.
Speaker 2: The ChatGPT interface today is one-size-fits-all. It’s very broad and very impressive technology, but this lets you specialize it to your particular needs to some degree. So I think that’s pretty interesting. It could appeal to a lot of people. You do not have to be a programmer to use it; it steps you through the process. It’s certainly very interesting technology, but there’s a long way to go between what we have today and the OpenAI vision, [00:02:30] which is to have hundreds, even thousands, of these GPTs that people can cobble together to do lots of very specific things. We’re not anywhere close to that yet. The second thing I thought was really interesting is that they’re going to make an app store for these GPTs. So you can upload your own if you want, you can use other people’s GPT apps, and of course you’ll get to pay for them.
Speaker 1: And we’ll be able to feature the best and the most popular GPTs. Of course, we’ll make sure that GPTs in the store follow our policies [00:03:00] before they’re accessible. Revenue sharing is important to us. We’re going to pay people who build the most useful and the most used GPTs a portion of our revenue.
Speaker 2: The people who develop the apps will get a cut of the revenue. So this is kind of the iPhone moment for OpenAI, really. They have a platform that they’re now inviting developers to come and build stuff on top of, and then sell that stuff. The third thing I thought was interesting was GPT-4 Turbo. [00:03:30] So the basic text AI system they have for everything they do at OpenAI is called GPT. The latest version was GPT-4. Now we also have GPT-4 Turbo, and it can handle more complicated queries. So if you’re asking it something, you can be much more sophisticated, up to a 300-page text prompt if you’re into that kind of thing. And one of the nice things about it is it’s a little bit cheaper to run, which will be helpful for attracting those developers to build services on top of it.
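For context, the kind of API call developers would make against GPT-4 Turbo might look like the sketch below, using OpenAI’s Python SDK. The model identifier "gpt-4-1106-preview" was the preview name given at launch, and the file name is just a hypothetical stand-in for a long document.

```python
# Rough sketch: sending a very long prompt to GPT-4 Turbo via the OpenAI
# Python SDK (v1.x). Assumes OPENAI_API_KEY is set in the environment.
# "gpt-4-1106-preview" was the preview model name announced at DevDay.
from openai import OpenAI

client = OpenAI()

# A long document (the "300-page prompt" corresponds to roughly a
# 128k-token context window) can be passed directly in the message.
with open("long_report.txt") as f:  # hypothetical input file
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You summarize long documents accurately."},
        {"role": "user", "content": "Summarize the key points of this report:\n\n" + document},
    ],
)

print(response.choices[0].message.content)
```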
Speaker 1: We are just as annoyed as all of you, probably [00:04:00] more, that GPT-4’s knowledge about the world ended in 2021. We will try to never let it get that out of date again. GPT-4 Turbo has knowledge about the world up to April of 2023, and we will continue to improve that over time. With our new text-to-speech model, you’ll be able to generate incredibly natural-sounding audio from text in the API, with six preset voices to choose from. I’ll play an example.
Speaker 3: Did you know that Alexander Graham Bell, the eminent inventor, was enchanted [00:04:30] by the world of sounds? His ingenious mind led to the creation of the Graphophone, which etches sounds onto wax, making voices whisper through time.
Speaker 1: This is much more natural than anything else we’ve heard out there. ChatGPT now uses GPT-4 Turbo with all the latest improvements, including the latest knowledge cutoff, which we’ll continue to update. That’s all live today. It can now browse the web when it needs to, write and run code, analyze data, take and generate images, and much more.
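The audio sample above comes from the new text-to-speech endpoint; a minimal sketch of how a developer might call it through the same Python SDK is below. The model name "tts-1" and the voice "alloy" (one of the six presets) are taken from the launch documentation, and the output file name is just illustrative.

```python
# Minimal sketch: generating speech from text with OpenAI's text-to-speech
# endpoint via the Python SDK (v1.x). "tts-1" and the "alloy" voice are the
# names used at launch; the output path is an arbitrary example.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of the six preset voices
    input=(
        "Did you know that Alexander Graham Bell, the eminent inventor, "
        "was enchanted by the world of sounds?"
    ),
)

# The response body is raw audio (MP3 by default); write it to disk.
with open("bell.mp3", "wb") as f:
    f.write(speech.content)
```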
Speaker 2: This is the base [00:05:00] level for everything that OpenAI does, so it’s important. Any little improvement helps quite a bit, but this is not so big that they’re calling it GPT-5. They did talk a little bit about that. So right now, one of the most interesting things about GPT-4 Turbo is that it’s actually more cost-effective. What’s important is that this is the next in a long series of incremental improvements, and they haven’t stopped, so we can expect more improvements. And when you add those up over two years, three years, four years, it really is a big deal. So this isn’t going to put somebody out of a job who [00:05:30] had a job yesterday, and it isn’t going to enable some breakthrough level of productivity that wasn’t available yesterday. But when you add this up over a few years, it is actually very important.