. . . intelligent in any manner. On what basis could the 'AI' justify any of its responses or extrapolate from them or deduce anything from them?
As is often said with respect to fascism, "Don't surrender in advance." The same is true of 'AI'. One probably should not grant these systems acceptance so easily: if such acceptance is ever merited, it can be granted at that point, but for now, setting up and publishing such an 'interview' scenario is surrendering in advance.
Nearly everything contained in the early responses Claude [sic: 'Clod'] makes ( up to about 5 minutes into the video, when 'AI' model training is mentioned ) is a rehashing of the complaints made against social media over the last 12 years, but with the term "AI" put in place of "algorithm-based social media".
Note how laudatory the responses are. That is corrosive to objectivity and serves no pursuit of whatever truth might exist.
Also, note how little is said of the fact that these models are based purely on the appropriation of huge amounts of copyrighted material. The underlying data that supports the statistics these models use is essentially stolen.
In what sense does the model have any 'belief' in the correctness of what Sen. Sanders says late in the video ( see 7:55 in the video, Claude: "You're absolutely right, Senator." )? And when the 'AI' says "I was being naive.", what exactly does that mean? Is that indicative of a mental state which has now been altered by the course of the discussion and has propagated across the entire 'AI' model? Not at all. Ask the 'AI' the same question but argue for a different result: will the 'AI' pliably shift to support that other answer? Most likely.
If you have never tried it, go check how these models do translations from one language to another, but check whether the translations form a closed loop ( i.e., the text should not be altered when going from one language to another and then back to the original language, but it often is ). Google Translate seemed particularly bad ( often cycling a translation into nonsense ) the last time I checked.
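The closed-loop check described above is easy to script. The sketch below uses a toy stub in place of a real translation service ( the `translate` function and its lookup table are invented for illustration; a real test would call an actual translation API ) — the point is the round-trip comparison logic, not the stub.

```python
# Round-trip ("closed loop") translation check.
# NOTE: `translate` is a hypothetical stand-in; swap in a real
# translation service to test an actual model.

def translate(text: str, src: str, dst: str) -> str:
    # Toy lookup table standing in for a real translation model.
    # The fr -> en entry deliberately drifts, mimicking the kind of
    # semantic slippage round-trip testing is meant to expose.
    table = {
        ("en", "fr"): {"The spirit is willing.": "L'esprit est volontaire."},
        ("fr", "en"): {"L'esprit est volontaire.": "The mind is voluntary."},
    }
    return table.get((src, dst), {}).get(text, text)

def round_trip_ok(text: str, src: str, dst: str) -> bool:
    """Translate src -> dst -> src and report whether the text survived."""
    there = translate(text, src, dst)
    back = translate(there, dst, src)
    return back == text

print(round_trip_ok("The spirit is willing.", "en", "fr"))  # False: the loop did not close
```

A model for which `round_trip_ok` frequently returns `False` is altering meaning somewhere in the loop, which is exactly the failure mode described above.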
At the end, asking what the 'AI' 'thinks' is just more surrendering in advance to the notion that these tools are in any way fit to be considered analogous to people.
So, I would just propose that presenting such a video as any sort of authentic exchange is very dangerous. People who literally do not understand anything about technology might very well be taken in by the illusion that these 'AI' models represent sentience in a box. Nothing could be further from the truth: they have no embodiment or, more literally, no skin in the game.
Usually, I would agree with Sen. Sanders's argued positions, but what end does such a video as this one serve? I would argue as above that this acceptance ( even if mock acceptance ) is dangerous.