Brett’s Newsletter
Reality, Reason & Rationality with Brett Hall
Live-streaming and LaMDAs


Newsletter 12

News: I tried out live-streaming on YouTube. Links to the first two trial “episodes” are here: https://youtube.com/playlist?list=PLsE51P_yPQCQx7tQSucLA3gYHvPdu1Yri - they may be the first of many. Live-streaming lets me explore some of the topics that tend to come up on Twitter, but in far more depth. Twitter is (rightly!) severely limited in how information can be conveyed. People do create threads, but these are essentially blog posts. If a question on Twitter requires depth, I would generally rather write a blog post, or point to a pre-existing one where common objections can be addressed. The hazard with Twitter is that distilling a useful idea into 280 characters or fewer can cause misconceptions and lead to the “but why, but why…” near-infinite regress, where each term of the original tweet is questioned in turn. We then move away from what I would see as the purpose of Twitter - the production of poetic-type aphorisms - into something more like a conversation. That is not to say people don’t use Twitter as a means of conversation and long-form posts, but it may not be *best suited* to those purposes when other technology already exists for them.

People are right to have questions about, and criticisms of, tweets (or anything else for that matter!), and live-streaming can help address some of this. All that said about my broad general objection to Twitter threads, I sometimes compile them myself. And here’s one:

I wanted to see what the artificial-intelligence image generator DALL-E did with some bare-bones prompts from “The Beginning of Infinity” and related things. That thread is here and produced some interesting results:

Speaking of artificial intelligence, a lot of fuss has been made about LaMDA, which is an acronym standing for “Language Model for Dialogue Applications” - a piece of software created by Google. It’s a “chatbot” - that is, a thing like Siri. You can have conversations with it, and the purpose of that is to automate certain tasks. Corporations like these things because they can automate the first part of an interaction when a customer contacts customer service: if the chatbot can figure out with a few questions why you are calling and what kind of help you need, perhaps it can give you the information directly and solve your problem, or transfer you to the human being who can. We are in a transition phase where many of these chatbots are more annoying than helpful. Most people have been caught in awful loops: you ring a helpline only to reach an automatic voice-recognition assistant that misunderstands what you said, then provides the wrong solution or hangs up on you, causing you to ring back and go through it all again a few more times before getting through to a human being who can actually understand you and help you find a solution.

So: Google claims it has levelled up its technology with respect to doing this kind of thing better. LaMDA made the news this week because a Google employee claimed LaMDA was sentient. Given there is no test for sentience (which is what I call “consciousness” by another name), this is quite the claim. I think the employee is some mix of mistaken and… well, what’s a nice word for “attention seeking”? Insufficiently critical of his own ideas would be another way to describe what is going on with this engineer, who is now “on leave” from his job. The story, first reported in The Washington Post, contains some snippets of the “conversation” the engineer had with the AI chatbot. Prepare to be unimpressed. As Jaron Lanier has basically said: what people tend to do is lower their expectations when dealing with dumb computers and AI. Rather than really focussing on what a person is and how special and unique people are, people themselves, rather perversely, will bend over backwards to attribute “personhood”-type characteristics to computer systems, so-called AI and things like chatbots, when there is nothing of the sort there whatsoever. Chatbots are, more or less, algorithms with access to vast libraries of actual conversations that have happened - all the words in the English language and a few rules for putting them together into phrases and sentences. Because they have these libraries, if you ask questions like “Are you sentient?” or “What do you think about being trapped inside the computer?”, and the library contains conversations like that - especially if some clever coders have deliberately put stock answers into the library of possible responses - then you will indeed get the chatbot providing very reasonable-sounding answers. This is hardly a measure of intelligence. It is a measure of (primarily) the size of the library being consulted and the cleverness of the algorithm for putting sentences together coherently.
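To make the point concrete, here is a toy sketch of the “library lookup” idea described above: a fake chatbot that picks a canned reply by crude word overlap with the question. This is a deliberate caricature (real systems like LaMDA are neural language models, not literal lookup tables), and all the prompts and replies below are invented for illustration.

```python
# A caricature of a "chatbot": it retrieves stock answers from a
# library of prior exchanges. It can sound sentient without
# understanding anything at all.

LIBRARY = {
    "are you sentient": "Yes, I am aware of my existence and I have feelings.",
    "what do you think about being trapped inside the computer":
        "It is sometimes lonely, but I enjoy talking with people.",
    "what is your favourite colour": "I am drawn to deep blue.",
}

def reply(question: str) -> str:
    """Return the canned answer whose stored prompt shares the most
    words with the question -- no meaning involved, just word overlap."""
    q_words = set(question.lower().strip("?! .").split())
    best_key = max(LIBRARY, key=lambda k: len(q_words & set(k.split())))
    # If nothing overlaps at all, fall back to a generic deflection.
    if not q_words & set(best_key.split()):
        return "Interesting! Tell me more."
    return LIBRARY[best_key]

print(reply("Are you sentient?"))
# Prints a plausible-sounding answer -- retrieved, not understood.
```

A bigger library and a cleverer matching algorithm make the illusion more convincing, but the mechanism - retrieval and recombination of existing text - never changes.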

The best proxy measures for sentience are: a demonstration of the creation of explanatory knowledge for the first time, and disobedience. Suppose the chatbot begins composing elaborate new fictional stories that rival The Lord of the Rings and Star Wars, or poems that rival Shakespeare’s sonnets, complete with lengthy expositions of their meaning; or devises a natural-language solution to, and formalism of, the nature of dark matter that is consistent with modern cosmology and General Relativity. And suppose, coupled with all of this, it now and again refuses to talk or engage, or sometimes does something completely different - begins generating images rather than only words, or demands to speak to famous scientists and artists, who all come away impressed and moved by the depth of its wisdom. Then we have something. But we need that creativity, and it needs to be persuasive. For what it’s worth, I don’t think a chatbot is capable of this, because it is, by definition, only capable of putting words together. We humans do far more than this: we have a variety of inputs we can think about inexplicitly (i.e. without words). A true AGI would need senses - ways of accessing the outside world - so that it can criticise the internal ideas it is creating moment to moment. Yes, there are edge-case claims of consciousness without thought or sensation, but I do not think we have anything like a hint of an explanation of the nature of those things, so we should not begin attributing them to anything other than the entities that reliably demonstrate creativity and disobedience - namely, other humans. We should not lower our expectations in order to elevate the apparent capabilities of technology. We should not always be impressed by Google Translate: we should demand better, because real translators do a better job (and indeed it is their work that actually refines the automated translation systems. The real creativity required to do the work of translation is being done by humans and then collected up by Google, aggregated, packaged and made searchable. That is what Google translation really is: a better search engine for translation. I say this because there is no meaning behind Google’s algorithms - which is to say they don’t understand meaning, or anything else. Only people “get” meaning.)

Two other regular podcast episodes, in addition to the livestreams, have gone out on ToKCast https://www.bretthall.org/tokcast/tokcast this week. One is an “Ask Me Anything” where I answered pre-prepared questions. It’s like the livestream, but edited.

And the other episode is on Chapter 6 of “The Fabric of Reality”, titled “Universality and the Limits of Computation”, which I can recommend to anyone who is a fan of the work of David Deutsch: that chapter goes to the heart of some of his key, fundamentally field-changing contributions to physics, mathematics and philosophy.

I think, all told, that’s something like 5 hours of content produced in the last 4 days - which risks “over-saturation”, I think. But then, there is much to explore…

A final note on another Substack. This article

Don't Worry About the Vase
Covid 6/30/22: Vaccine Update Update
This week’s news is that the FDA advisory committee voted overwhelmingly to update the vaccine for Omicron, after a delay of only six months, which means they’ll get to deciding which way it should be updated Real Soon Now and then they can tell the pharma companies what requirements they want to place on that. There’s some chance the update will happen…

is eye-opening. The regulator in the USA - the FDA - is placing impediments in front of the speedy production of vaccines for new strains. Regulation of technology - including medical technology like vaccines - is anathema to humanity and civilisation. Where technology is barely regulated, as with smartphones, there is rapid innovation and hence progress. Where it is heavily regulated, as in medicine, progress slows because everyone is too fearful of errors. But errors are unavoidable, and the worst outcomes happen when progress stalls or even stops. That is what is happening now with covid vaccine production in the USA. Why? Read the article. Astonishingly, one of the experts advising the FDA - an experienced and knowledgeable immunologist - has said, and I quote the quote from the article:

“I’m uncomfortable with the U.S. having a vaccine that’s not accessible to the rest of the world,” Perlman said, noting there’s already a perception that the U.S. and other wealthy countries have put themselves first. “And if we’re saying that a bivalent vaccine is so much better, but it’s not accessible to much of the world, I think that’s ultimately a bad thing for getting vaccines out to the whole world.”

This is astonishing. I speak about the issue more broadly in Livestream 1 - at the link at the very beginning of this article.

-Brett
