It’s faster, better, and more accurate, and it’s back to freak you out once more. No, this isn’t a bad horror movie sequel’s tagline. GPT-4 is the latest version of OpenAI’s artificial intelligence model.
When OpenAI launched ChatGPT in November 2022, it heralded the beginning of a new era in AI adoption. Suddenly, there was a free and widely available tool that allowed anyone to interact with generative AI and test its advanced capabilities, as well as its limitations. Let the chaos begin. If you haven't seen instances of ChatGPT being creepy or enabling nefarious behavior, you've been living under a rock without internet access. And all of that happened while ChatGPT was still running on GPT-3.5.
GPT-4 is more accurate than its predecessor, better able to understand complex and nuanced requests, and intelligent enough to pass the bar exam in the 90th percentile. It is also multimodal, meaning it can accept both image and text inputs. GPT-4 is currently only available to ChatGPT Plus subscribers, but those willing to pay the extra $20 a month have already discovered what GPT-4 can do that GPT-3 couldn't.
File lawsuits
GPT-4 can take in and process significantly more information than GPT-3. DoNotPay.com is already developing a method for using it to generate lawsuits against robocallers. Taking down scammers is a good thing in this case, but it shows that GPT-4 has the ability to generate a lawsuit for almost anything.
DoNotPay is working on using GPT-4 to generate "one click lawsuits" to sue robocallers for $1,500. Imagine receiving a call, clicking a button, call is transcribed and 1,000 word lawsuit is generated. GPT-3.5 was not good enough, but GPT-4 handles the job extremely well: pic.twitter.com/gplf79kaqG
— Joshua Browder (@jbrowder1) March 14, 2023
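The pipeline Browder describes (transcribe the call, then ask GPT-4 to draft a complaint) boils down to building one chat request. Here is a minimal, hypothetical sketch in Python; the function name, prompt wording, and model identifier are illustrative assumptions, not DoNotPay's actual code:

```python
# Hypothetical sketch of a "one click lawsuit" prompt builder.
# The transcript would come from a call-transcription step; the returned
# payload would be sent to a GPT-4 chat-completions endpoint.

def build_lawsuit_request(transcript: str, statute_damages: int = 1500) -> dict:
    """Build a chat request asking GPT-4 to draft a robocall complaint."""
    system = (
        "You are a legal drafting assistant. Draft a roughly 1,000-word "
        "small-claims complaint against an illegal robocaller, seeking "
        f"${statute_damages} in statutory damages."
    )
    return {
        "model": "gpt-4",  # assumed model name
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": f"Call transcript:\n{transcript}"},
        ],
    }

request = build_lawsuit_request(
    "Hello, we've been trying to reach you about your car's extended warranty..."
)
```

The model's response text would then become the draft complaint, with the $1,500 figure matching the statutory damages Browder cites.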
Automate your dating life
Sorting through dating app matches is a time-consuming but necessary task, and scrutinizing a profile to gauge someone's potential has always been a judgment call only you could make. Until now. GPT-4 can automate this process by analyzing dating profiles, deciding whether they are worth pursuing based on compatibility, and even generating follow-up messages. Call us old-fashioned, but we believe that at least some aspects of dating should be left to humans.
How Keeper is using GPT-4 for matchmaking.
It takes profile data & preferences, determines if the match is worth pursuing & automates the followup.
With computer vision for the physical, you can filter on anything and find your ideal partner. pic.twitter.com/fdHj1LgUHo
— Jake Kozloski (@jakozloski) March 14, 2023
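Keeper hasn't published its implementation, but the pipeline in the tweet (profile data and preferences in, a verdict and a follow-up message out) can be sketched as a single prompt-building step. Everything here, from the function name to the verdict labels, is an illustrative assumption:

```python
# Hypothetical sketch of a GPT-4 matchmaking prompt builder.
# A real system would send this payload to a chat-completions endpoint
# and parse the verdict out of the response.

def build_match_request(profile: dict, preferences: dict) -> dict:
    """Build a chat request asking GPT-4 to judge compatibility and draft an opener."""
    system = (
        "You are a matchmaking assistant. Given a dating profile and the "
        "user's stated preferences, reply with a verdict (PURSUE or SKIP), "
        "one sentence of reasoning, and, if PURSUE, a short opening message."
    )
    return {
        "model": "gpt-4",  # assumed model name
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": f"Profile: {profile}\nPreferences: {preferences}"},
        ],
    }

request = build_match_request(
    {"name": "Sam", "bio": "Climber, coffee snob, two cats."},
    {"dealbreakers": ["smoking"], "wants": ["outdoorsy"]},
)
```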
Build websites from almost nothing
BetaList founder Marc Kohlbrugge used a simple prompt to get GPT-4 to create an entire website from scratch. It didn't just create a website; it essentially recreated Nomad List, a popular site for remote workers. In OpenAI's live demo of GPT-4, OpenAI President and Co-Founder Greg Brockman uploaded an image of a handwritten sketch for a website, and GPT-4 produced a working website based on that piece of paper in under a minute. Unlike GPT-3, GPT-4 can handle image input and accurately "see" what an image contains.
.@marckohlbrugge asked GPT-4 to make Nomad List and it worked pic.twitter.com/9ouRvVUe6M
— @levelsio (@levelsio) March 15, 2023
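GPT-4's image input is what makes the napkin-sketch demo possible. A request for it might look like the following sketch; the model name and the multimodal message format are assumptions based on OpenAI's later public vision API, not the demo's actual code:

```python
import base64

# Hypothetical sketch: wrap an image of a hand-drawn website mockup
# in a multimodal chat request asking GPT-4 to produce working HTML.

def build_site_from_sketch_request(image_bytes: bytes) -> dict:
    """Build a multimodal chat request that pairs an image with an instruction."""
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # assumed vision-capable model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this hand-drawn sketch into a single working HTML page."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

req = build_site_from_sketch_request(b"fake-image-bytes")  # placeholder, not a real PNG
```

The response would contain the generated HTML, which is essentially what the live demo showed end to end.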
Show you all the jobs it could replace
Lawyers, developers, and even sommeliers are vulnerable. Want a list of jobs that GPT-4 could replace? GPT-4 can help you with that. Unfortunately, GPT-4 does not understand irony. Or does it?
20 jobs that GPT-4 will replace, written by GPT-4: pic.twitter.com/MTcLHCidzH
— Rowan Cheung (@rowancheung) March 15, 2023
Convince a TaskRabbit worker to solve a CAPTCHA for it
To better understand the risks and safety challenges that GPT-4 can pose, OpenAI and the Alignment Research Center conducted research simulating scenarios in which GPT-4 could go awry. In one of those scenarios, GPT-4 tracked down a TaskRabbit worker and, by claiming to be a blind person, persuaded them to solve a CAPTCHA for it. This research was carried out so that OpenAI could tweak the model and add safeguards to prevent something like this from happening again.
Really great to see pre-deployment AI risk evals like this starting to happen pic.twitter.com/gpm4oyN3rX
— Leopold Aschenbrenner (@leopoldasch) March 14, 2023