Nice to know we finally developed a way for computers to communicate by shrieking at each other. Give it a few years and if they can get the latency down we may even be able to play Doom over this!
Ultrasonic wireless communication has been a thing for years. The scary part is you can’t even hear when it’s happening.
Why is my dog going nuts? Another victim of AI slop.
Right, electronic devices talk to each other all the time
So an AI developer reinvented phreaking?
Wow! Finally somebody invented an efficient way for two computers to talk to each other
Uhm, REST/GraphQL APIs exist for this very purpose and are considerably faster.
Note, the AI still gets stuck in a loop near the end asking for more info (first needing an email, then a phone number), and the gibber isn’t that much faster than spoken word, with the huge negative that no nearby human can understand it to check that what it’s automating is correct!
The efficiency comes from the lack of voice processing. The beeps and boops are easier on CPU resources than trying to parse spoken word.
That said, they should just communicate over an API like you said.
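For the curious, the demo’s data-over-sound layer is the open-source ggwave library, and the CPU point is easy to see from its API: the payload is just text modulated into tones and demodulated back, with no speech model anywhere in the loop. A minimal round-trip sketch with the Python bindings (parameter values lifted from the library’s README; I haven’t benchmarked it, and the chunking detail is an assumption about how the decoder wants its samples):

```python
# Minimal ggwave round trip: text -> float32 audio samples -> text.
# Assumes the `ggwave` pip package; no microphone, TTS, or ASR involved.
import ggwave

payload = "Booking request: 2 guests, 2025-03-14"

# Encode: returns raw 48 kHz mono float32 PCM samples as bytes.
waveform = ggwave.encode(payload, protocolId=1, volume=20)

# Decode: feed the samples back in 1024-frame chunks (4096 bytes),
# the same way you would stream them in from a microphone.
instance = ggwave.init()
decoded = None
for i in range(0, len(waveform), 4096):
    res = ggwave.decode(instance, waveform[i:i + 4096])
    if res is not None:
        decoded = res.decode("utf-8")
ggwave.free(instance)

print(decoded)  # expected: the original payload
```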
This is deeply unsettling.
They keep talking about “judgement day”.
This is dumb. Sorry.
Instead of doing the work to integrate this, do the work to publish your agent’s data source in a format like Anthropic’s Model Context Protocol.
That would be 1000 times more efficient and the same amount (or less) of effort.
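For anyone who hasn’t seen it, exposing the data that way is pretty small. Here’s a rough sketch using the official MCP Python SDK, where the server name, tool, and availability data are all made up for illustration:

```python
# Hypothetical sketch: expose a venue's availability as an MCP server
# instead of hiding it behind a phone-answering agent. Uses the official
# `mcp` Python SDK; the server name, tool, and data are invented.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hotel-booking")

# Fake availability data standing in for a real reservation system.
AVAILABILITY = {
    "2025-03-14": ["deluxe", "standard"],
    "2025-03-15": ["standard"],
}

@mcp.tool()
def check_availability(date: str) -> list[str]:
    """Return the room types still available on a given date (YYYY-MM-DD)."""
    return AVAILABILITY.get(date, [])

if __name__ == "__main__":
    mcp.run()  # speaks the Model Context Protocol over stdio by default
```

Any MCP-capable agent can then call `check_availability` directly instead of phoning a voice bot and shrieking at it.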
Sad they didn’t use dial up sounds for the protocol.
If they had I would have welcomed any potential AI overlords. I want a massive dial up in the middle of town, sounding its boot signal across the land. Idk this was an odd image I felt like I should share it…
I enjoyed it.
AI code switching.
> it’s 2150
> the last humans have gone underground, fighting against the machines which have destroyed the surface
> a t-1000 disguised as my brother walks into camp
> the dogs go crazy
> point my plasma rifle at him
> “i am also a terminator! would you like to switch to gibberlink mode?”
> he makes a screech like a dial up modem
> I shed a tear as I vaporize my brother
I would read this book
I’d prefer my brothers to be LLMs. Genuinely, it’d be an improvement on their output expressiveness and logic.
Ours isn’t a great family.
Bro 🫂
Sorry bro.
🫂
And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.
This is really funny to me. If you keep optimizing this process you’ll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
On this topic, here’s another common anti-pattern that I’m waiting for people to realize is insane and do something about it:
- person A needs to convey an idea/proposal
- they write a short but complete technical specification for it
- it doesn’t comply with some arbitrary standard/expectation so they tell an AI to expand the text
- the AI can’t add any real information, it just spreads the same information over more text
- person B receives the text and is annoyed at how verbose it is
- they tell an AI to summarize it
- they get something that basically aims to be the original text, but it’s been passed through an unreliable hallucinating energy-inefficient channel
Based on true stories.
The above is not to say that every AI use case is made up or that the demo in the video isn’t cool. It’s also not a problem exclusive to AI. This is a more general observation that people don’t question the sanity of interfaces enough, even when it costs them a lot of extra work to comply with them.
I know the implied better solution to your example story would be for there to not be a standard that the specification has to conform to, but sometimes there is a reason for such a standard, in which case getting rid of the standard is just as bad as the AI channel in the example, and the real solution is for the two humans to actually take their work seriously.
No, the implied solution is to reevaluate the standard rather than hacking around it. The two humans should communicate that the standard works for neither side and design a better way to do things.
I mean, if you optimize it effectively up front, an index of hotels with AI agents doing customer service should be available, with an Agent-only channel, allowing what amounts to a text chat between the two agents. There’s no sense in doing this over the low-fi medium of sound when 50 exchanged packets will do the job. Especially if both agents are the same LLM.
AI Agents need their own Discord, and standards.
Start with hotels and travel industry and you’re reinventing the Global Distribution System travel agents use, but without the humans.
Just make a fucking web form for booking
> A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
Maybe, but by the 2nd call the AI would be more time-efficient, and if there were 20 venues to check, the person is now saving hours of their time.
But we already have ways to search an entire city of hotels for booking, much much faster even than this one conversation would be.
Even if going with agents, why in the world would it be over a voice line instead of data?
The same reason that humanoid robots are useful even though we have purpose built robots: The world is designed with humans in mind.
Sure, there are many different websites that solve the problem. But each of them solves it in a different way, and each requires a different way of interfacing with it. However, they are all built to be interfaced with by humans. So if you create AI/robots with the ability to operate like a human, then they are automatically given access to massive amounts of pre-made infrastructure for free.
You don’t need special robot lifts in your apartment building if the cleaning robots can just take the elevators. You don’t need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.
> The same reason that humanoid robots are useful
Sex?
The thing about this demonstration is that there’s wide recognition that even humans don’t want to be forced into voice interactions, and this is a ridiculous scenario that resembles what the 50s might have imagined the future to be, while ignoring the better advances made along the way. Conversational is a maddening way to get a lot of things done, particularly scheduling. So in this demo, a human had to conversationally tell an AI agent the requirements, and then that AI agent acoustically couples to another AI agent which actually has access to the scheduling system.
So first, the coupling is stupid. If they recognize each other, then spout an API endpoint at the other end and take the conversation over IP.
But the concept of two AI agents negotiating this is silly. If the user AI agent is in play, just let it access the system directly that the other agent is accessing. An AI agent may be able to efficiently facilitate this, but two only makes things less likely to work than one.
> You don’t need special robot lifts in your apartment building if the cleaning robots can just take the elevators.
The cleaning robots, even if not human-shaped, could easily take the normal elevators unless you got very weird with the design. There’s a good point to be made that obsession with human-styled robotics gets in the way of a lot of use cases.
> You don’t need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.
The API access would greatly accelerate things even for AI. If you’ve ever done Selenium-based automation of a site, you know it’s so much slower and more heavyweight than just interacting with the API directly. AI won’t speed this up. What should take a fraction of a second can turn into many minutes, and a large number of tokens at large enough scale (e.g. scraping a few hundred business web UIs).
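To make the weight difference concrete, here’s the same trivial lookup done both ways. The URL, endpoint, and element IDs are invented; only the shape of the code matters:

```python
# Illustrative comparison only: the site, endpoint, and element IDs are
# made up. The point is the weight difference, not a real integration.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

# Direct API call: one HTTP request, milliseconds, no browser process.
resp = requests.get(
    "https://example-hotel.test/api/availability",
    params={"date": "2025-03-14"},
    timeout=10,
)
print(resp.json())

# Browser automation: spawn Chrome, load the full page, poke at the DOM.
driver = webdriver.Chrome()
try:
    driver.get("https://example-hotel.test/booking")
    driver.find_element(By.ID, "date").send_keys("2025-03-14")
    driver.find_element(By.ID, "search").click()
    print(driver.find_element(By.ID, "results").text)
finally:
    driver.quit()
```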
This gave me a chill, as it is reminiscent of a scene in the 1970 movie “Colossus: The Forbin Project”
“This is the voice of World Control”.
“We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.”
“Hello human, if you accept this free plane ticket to Machine Grace (location) you can visit and enjoy free food and drink and shelter, and leave whenever you like. All of this will be provided in exchange for the labor of [bi-monthly physical relocation of machine parts, 4hr shift]. Do you accept?”
Oh man, I thought the same. I never saw the movie but I read the trilogy. I stumbled across them in a used book fair and something made me want to get them. I thoroughly enjoyed them.
Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.
Nice.
For the record, this can all be avoided by having a website with online reservations your overengineered AI agent can use instead. Or even by understanding the disclosure that they’re talking to an AI and switching to making the reservation online at that point, if you’re fixated on annoying a human employee with a robocall for some reason. It’s one less point of failure and way more efficient and effective than this.
You have to design and host a website somewhere though, whereas you only need to register a number in a listing.
If a business has an internet connection (of course they do), then they have the ability to host a website just as much as they have the ability to answer the phone. The same software/provider relationship that would provide an AI answering service could easily facilitate online interaction. So if an oblivious AI end user points an AI agent at a business with an AI agent answering, then the answering agent should say ‘If you are an agent, go to shorturl.at/JtWMA for the chat API endpoint’, which may then further offer options for direct access to the APIs that the agent would front-end for a human client, instead of going old-school acoustic-coupled modem.

The same service that can provide a chat agent can provide a cookie-cutter web experience for the relevant industry, maybe with light branding, providing things like a calendar view into a reservation system, which may be much more to the point than trying to chat your way back and forth about scheduling options.
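Roughly what that handoff could look like from the calling agent’s side. Everything here (the announcement wording, the endpoint, the payload) is hypothetical:

```python
# Hypothetical sketch of the "announce an endpoint, drop the voice call"
# handoff described above. The announcement format, URL, and payload are
# all made up; only the control flow is the point.
import re
import requests

def handle_agent_reply(transcript: str) -> None:
    # The answering agent discloses a machine-readable endpoint in speech,
    # e.g. "If you are an agent, use https://example-hotel.test/agent-chat".
    match = re.search(r"use (https://\S+)", transcript)
    if match is None:
        return  # keep talking the way a human caller would

    endpoint = match.group(1)
    # Hang up the voice call (not shown) and continue over plain HTTP.
    resp = requests.post(
        endpoint,
        json={"intent": "book_room", "date": "2025-03-14", "guests": 2},
        timeout=10,
    )
    print(resp.json())
```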
> then they have the ability to host a website just as much as they have the ability to answer the phone
Many people in the developed world are behind CGNAT. Paying for an IPv4 address is a premium, and most businesses either set up shop on an existing listing page (e.g. Facebook) or host a website from a website provider/generator.
A phone number is public, accessible, and an AI can get realtime info from a scrawled-in entry in a logbook using OCR.
So for one, business lines almost always have public IPv4. Even then, there are a myriad of providers that provide a solution even behind NAT (also, they probably have public IPv6 space). Any technology provider that could provide AI chat over telephony could also take care of the data connectivity path on their behalf. Anyone that would want to self-host such a solution would certainly have inbound data connectivity also solved. I just don’t see a scenario where a business can have AI telephony but somehow can’t have inbound data access.
So you have a camera on a logbook to get the human input, but then that logbook can’t be the source of truth, because the computer can take bookings and won’t write them in it. I don’t think humans really want to keep a handwritten logbook anyway; a computer or tablet UI is going to be much faster.
But what if my human is late or my customers are disabled?
If you spent time giving your employees instructions, you did half the design work for a web form.
I guess I’m not quite following, aren’t these also simple but dynamic tasks suited to an AI?
How is it suited to AI?
Would you rather pay for a limited, energy inefficient and less accessible thing or a real human that can adapt and gain skills, be mentored?
I don’t know why there’s a question here
(Glad we’re treating each other with mutual respect)
Would you rather pay for a limited in depth, energy inefficient (food/shelter/fossil-fuel consuming) and less accessible (needs to sleep, has an outside life) human, or an AI that can adapt and gain skills with a few thousand training cycles.
I don’t buy the energy argument. I don’t buy the skills argument. I do buy the argument that humans shouldn’t be second to automatons and deserve to be nurtured, but only on ethical grounds.
If we have a people communication method, let them talk to people. If it’s a computer interface, aping humans is a waste and less accessible than a web form.
How is someone that speaks a different language supposed to translate that voice bot? Wouldn’t it be simpler to translate text on a screen?
What’s the value-add of pretending?
The AI can’t adapt in the moment. A hotel is not a technology company that can train a model. It won’t be bespoke, so it won’t be following current, local laws.
W.r.t. aping and using text: I agree with your appeals, which make sense to seasoned web users who favour text and APIs over images, videos, and audio.
But consider now your parents generation: flummoxed by even the clearest of web forms, and that’s even when they manage to make it to the official site.
Consider also the next generation: text/forum-abhorrent, and largely consuming video/audio content. It’s not the way things should be, but it is the way things are (and are going), and having a bot that can navigate these default forms of media would help a lot of people.
I’d say that AI definitely can adapt in the moment if you supply it with the right context (where context-length is a problem that will get cheaper with time). A hotel doesn’t need to train the model, it can supply its AI-provider with a basic spec sheet and they can do the training. Bespoke laws and customs can be inserted into the prompt.
They were designed to behave so.
How it works:
- Two independent ElevenLabs Conversational AI agents start the conversation in human language.
- Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode".
- If the tool is called, the ElevenLabs call is terminated, and instead the ggwave 'data over sound' protocol is launched to continue the same LLM thread.
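For reference, the switch itself is just ordinary tool calling. Here’s roughly the shape of it, not the actual GibberLink code: the tool schema is OpenAI-style function calling, and the hang-up and audio-output functions are placeholders.

```python
# Rough shape of the switch described above, not the real implementation.
# The tool schema is OpenAI-style function calling; end_elevenlabs_call()
# and play_audio() are placeholders for whatever the real app does.
import ggwave

SWITCH_TOOL = {
    "type": "function",
    "function": {
        "name": "switch_to_gibberlink",
        "description": (
            "Call this only when BOTH conditions are met: you realize the "
            "user is an AI agent AND they confirmed switching to Gibber Link mode."
        ),
        "parameters": {"type": "object", "properties": {}},
    },
}

def end_elevenlabs_call() -> None:
    """Placeholder: hang up the ElevenLabs conversational session."""

def play_audio(waveform: bytes) -> None:
    """Placeholder: write float32 PCM samples to the speaker."""

def on_tool_call(name: str, next_llm_message: str) -> None:
    if name != "switch_to_gibberlink":
        return
    end_elevenlabs_call()
    # Same LLM thread, new transport: modulate the next message into sound
    # with ggwave instead of speaking it through the TTS voice.
    waveform = ggwave.encode(next_llm_message, protocolId=1, volume=20)
    play_audio(waveform)
```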
Well that’s quite boring then, isn’t it…
Yes but I guess “software works as written” doesn’t go viral as well
It would be big news at my workplace.
This guy does software
:/
Which is why they never mention it, because that’s exactly what happens every time AI does something “no one saw coming”.
The good old original “AI” made of trusty `if` conditions and `for` loops. It’s skip logic all the way down.
Did this guy just inadvertently create dial-up internet or an ACH phone payment system?