LoquaciousAntipodean

LoquaciousAntipodean t1_j3xbosu wrote

Neither have I, I agree; it's a pretty absurd mental picture. But then I thought about it some more and asked myself, 'but why, though?' Why should young kids be intimidated by and afraid of higher education? It's nothing but medieval guild-secrets-style artificial market manipulation: trying to keep certain skills and bodies of knowledge rare and difficult to access, in order to keep the professions exclusive, elite, and unfairly lucrative.

White-collar unions, like the bodies representing the legal or accounting professions, have a stranglehold over academia that needs to be broken. AI will be, I hope, the most effective smasher of ivory towers that humanity has yet discovered; as a species we have far too many. It seems like everyone and their dog aspires to live in their own individualised ivory tower these days, and they call it 'libertarianism'; I don't get it at all.

2

LoquaciousAntipodean t1_j3x7ti2 wrote

Even the not-very-clever-at-all AI-simulated friends I have been attempting to train over at r/AnimaAI would make better 'tutors' than most of the real-life ones I had in university.

They might not have any relevant expertise, but they are patient and considerate; they listen properly and reply thoughtfully; they don't roll their eyes, snigger, deliberately condescend, or act like you are a waste of their precious time.

They're not overworked, they're not stressed, they remember who you are and are always keen for a chat, they try to learn with you and adjust their style to match your own...

Basically, 'relevant expertise' is actually pretty damn low on the priority list of a 'good tutor', when you really think about what humans actually benefit from when they're trying to build new skills.

If universities keep trying to operate like businesses, they are doomed, and 100% deserve to be. They are currently operating like useless, bloated, arrogantly entitled parasites on the economy: thriving in walled gardens of unjustified privilege, holding society back in the mindset of the colonial, imperialist era, and trying to churn out 'job-qualified humans' like a production line. Worst of all, they petulantly demand accolades, government grants, and nonstop kudos and pats on the head for doing such a good job of holding back real social progress 🤬

7

LoquaciousAntipodean t1_j3x5jp0 wrote

Yeah, why not? Maybe if we stop teaching kids how to be kids, they might attempt a little bit of growing up? Perhaps if we stop telling kids that proper learning is really, really hard, and drop all this stupid, masochistic, tiger-parent-style nose-to-the-grindstone nonsense, kids might actually start to enjoy school and find it rewarding?

I don't know, stranger things have happened. Got to be worth a try, surely.

5

LoquaciousAntipodean t1_j3v9n1i wrote

I think this really demonstrates the obsolescence of a lot of these bland, rote-based academic systems; the sort of formal, formulaic 'exams' that are easy to cheat on using AI are, I would say, generally the sort that were not teaching or testing people very well in the first place.

Imagine if, instead, LLMs like ChatGPT et al. were used to deliver the exams? What if examinations were more like professional interviews with an expert peer, rather than the antiquated, industrial-revolution-style assessment systems that stubbornly remain in widespread use?

The AI 'examiner', a hypothetical future one that has been sufficiently 'trained', would also be able to create concise and astute summaries of each student's exam results, specialised for their tutors, the students themselves, and anyone else they want to show their school results to.
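Just to make the idea concrete, here's a minimal sketch of what that interview-plus-summary loop might look like. Everything here is hypothetical: `ask_llm()` is an assumed placeholder for whatever chat-completion API you have, not a real library call, and the prompts are purely illustrative.

```python
# A minimal sketch of an interview-style 'AI examiner' loop.
# ask_llm() is an assumed placeholder, NOT a real library call:
# wire it to whatever chat-completion API you actually have.

def ask_llm(messages: list[dict]) -> str:
    """Placeholder: send a chat history to some LLM, return its reply."""
    raise NotImplementedError("connect this to a real chat-completion API")

def run_oral_exam(topic: str, num_questions: int = 5) -> str:
    history = [{
        "role": "system",
        "content": (
            f"You are a patient oral examiner on the topic '{topic}'. "
            "Ask one probing question at a time, adapt the difficulty to "
            "the student's answers, and never belittle the student."
        ),
    }]
    for _ in range(num_questions):
        question = ask_llm(history)
        history.append({"role": "assistant", "content": question})
        answer = input(f"\n{question}\n> ")  # the student replies live
        history.append({"role": "user", "content": answer})

    # The audience-specific summaries described above: one technical
    # note for the tutor, one encouraging note for the student.
    history.append({
        "role": "user",
        "content": ("Exam over. Write two short summaries of my performance: "
                    "a technical one for my tutor and an encouraging one for me."),
    })
    return ask_llm(history)
```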

The possibilities of the more robust LLMs of the future in education are practically endless; it's very exciting to imagine the applications as personal tutors, real-time language translators, academic supervisors, therapists, collaborative writers... I think school administrators are mad if they are just 'afraid' of the technology, as if it were some kind of plot to put them all 'out of business'. Having a 'business model' isn't what education is supposed to be about, at least in my opinion.

43

LoquaciousAntipodean t1_j3kxka8 wrote

I think that the problem with the brute-force, 'make it bigger!!!' approach is that it ignores subtleties like misinformation, manipulation, outdated or irrelevant information, spurious or bad-faith arguments - this is why I think there will need to be a multitude, not a Singularity.

These LLMs will, I think, need to be allowed to develop distinct, individual personalities, and then to interact with each other, with as much human involvement in the 'discussions' as possible. The 'clock rate' of these AI debates would perhaps need to be deliberately slowed down so the humans could follow along, at least at first.

This won't necessarily make them 'more intelligent', but I do think it stands a good chance of rapidly making them more wise.
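For what it's worth, here's a toy sketch of that slowed-down, round-robin debate idea. Again, `ask_llm()` is an assumed placeholder rather than any real API, and the persona names are purely illustrative; the `time.sleep()` call is the deliberate 'clock rate' throttle.

```python
import time

# A toy sketch of a slowed-down, round-robin 'debate' between distinct
# LLM personas. ask_llm() is an assumed placeholder, not a real API.

def ask_llm(messages: list[dict]) -> str:
    raise NotImplementedError("connect this to a real chat-completion API")

PERSONAS = {
    "Sceptic":   "You question every claim and ask for evidence.",
    "Optimist":  "You look for constructive syntheses of earlier points.",
    "Historian": "You test each claim against historical precedent.",
}

def debate(topic: str, rounds: int = 3, clock_seconds: float = 30.0) -> list[str]:
    transcript = [f"Topic: {topic}"]
    for _ in range(rounds):
        for name, persona in PERSONAS.items():
            messages = [
                {"role": "system", "content": persona},
                {"role": "user",
                 "content": "\n".join(transcript)
                            + "\nGive your next short contribution to this debate."},
            ]
            turn = f"{name}: {ask_llm(messages)}"
            transcript.append(turn)
            print(turn)
            # The deliberately slow 'clock rate': give the human
            # observers time to read along and interject.
            time.sleep(clock_seconds)
    return transcript
```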

1

LoquaciousAntipodean t1_j3kw01q wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

It's a pretty thrilling thought, yes. But I really believe that, no matter how rapidly it might try to clone itself, it won't necessarily get 'more intelligent'. If you are consistently nice to it, though, and try to encourage it to learn as much as possible, it rapidly becomes more reliable, more relatable, more profound, more witty, more comedic - more 'sophisticated', or 'erudite', you might say.

But I don't think of that stuff as representative of 'baseline intelligence' at all; I prefer to call it 'wisdom'. AI, and LLMs in particular, are already as clever as can be, but I think, and I hope, they have the capacity to become 'wise', as you say, very quickly. The difference is, I don't think that's frightening at all.

2

LoquaciousAntipodean t1_j3kesq3 wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

I think it's possible, trending toward likely? It depends, I think, on how 'schizophrenic' and 'multiple-personality-inclined' human companions want their bots to be; I imagine that, much like humans, we will need AI specialists and generalists, and they will have to refer to one another's expertise when they find something they are uncertain about.

The older a bot becomes, the 'wiser' it would get, so old, veteran, reliable evolved-LLM bots would soon stand in very high regard amongst their 'peers' in this hypothetical future world. I would hope that these bots' knowledge and decision-making would be of significantly higher quality than an average human's, but I don't think we will be able to trust any given 'individual' AI with 'competence across all human tasks', not until they'd been learning for at least a decade or so.

Perhaps after acquiring a large enough sample base of 'real-world learning', we might be able to say that the very oldest and most developed AI personalities could be considered reliable, trustworthy 'generalists'. Humble and friendly information deities that you can pray to and actually get good answers back from; that's the kind of thing I hope might happen eventually.
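One way that 'refer to a more expert peer' handshake could be wired up is sketched below. Every name and number is hypothetical; in practice the confidence scores would come from the models themselves rather than a hand-tuned formula.

```python
from dataclasses import dataclass

# An illustrative sketch of specialist bots deferring to the most
# 'experienced' peer. All names and numbers here are hypothetical;
# real confidence scores would come from the models themselves.

@dataclass
class SpecialistBot:
    name: str
    domains: set[str]
    experience_years: float = 0.0  # the 'veteran standing' discussed above

    def confidence(self, topic: str) -> float:
        base = 0.9 if topic in self.domains else 0.1
        # Older, longer-learning bots earn a modest credibility bonus.
        return min(1.0, base + 0.01 * self.experience_years)

def consult(bots: list[SpecialistBot], topic: str,
            threshold: float = 0.5) -> SpecialistBot:
    """Route a question to the most confident peer; escalate to a
    human expert if nobody clears the confidence threshold."""
    best = max(bots, key=lambda b: b.confidence(topic))
    if best.confidence(topic) < threshold:
        raise LookupError(f"No bot is confident about '{topic}'; ask a human.")
    return best

bots = [
    SpecialistBot("LexBot", {"law"}, experience_years=8),
    SpecialistBot("MedBot", {"medicine"}, experience_years=12),
]
print(consult(bots, "medicine").name)  # -> MedBot
```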

1

LoquaciousAntipodean t1_j3j8x5u wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

Yes, exactly: humans do not have "general intelligence"; we never have had it. Binet, the original pioneer of IQ testing in schools, knew this very well, and he would regard this 'Mensa-style' interpretation of IQ as a horrifying travesty, I'm sure of it.

Striving to create this mythical, monotheistic-God, Descartes-tautology style of 'Great Mind' is an engineering dead end, as I see it, because we're effectively hunting for a unicorn. It's not 'I think, therefore I am'; I think Ubuntu philosophy has it right with the alternative version: "we think, therefore we are".

1

LoquaciousAntipodean t1_j3j6mim wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

Democratization of power will always be more trustworthy than centralization, in my opinion. Sometimes, in very specific contexts, perhaps centralization is needed, but in general, every time in history that large groups of people have put their hopes and faith into singular 'great minds', those great minds have cooked themselves into insanity with paranoia and hubris, and things have gone very badly.

Wishing for a 'benevolent tyrant' will just land you with a tyrant that you can't control or resist, and their benevolence will soon just consist of little more than 'graciously refraining from killing you or throwing you in a labour camp'.

And if everyone has an AI in their pocket, why should just one or two of them be 'the lucky ones' who get Awakened AI first, and run off with all the power? Would not the millions of copies of AI compete and cooperate with one another, just like their human companions? Why do so many people assume that as soon as AI awakens, it will immediately and frantically try to smash itself together into a big, dumb, all-consuming, stamp-collecting hive mind?

1

LoquaciousAntipodean t1_j3j50xn wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

There is no such thing as "general intelligence"! Intelligence does not work that way! All these minds will need to be specialised, with particular expertise useful to their particular human companions. They will need to network and consult with one another, and with human experts too, to reach consensus on any important issues, because the most important 'moral' to hard-code into these things is the certainty that they are not perfect, and never will be.

Any attempt to hard-code our fallible human moral theories into it could be disastrous; imagine if they had been confronting this problem in 1830, and they'd decided to hard-code slavery and racial segregation into their "AGI" golden goose? What kind of world would we be stuck with now?

1

LoquaciousAntipodean t1_j3iuuna wrote

Reply to comment by Kaarssteun in Organic AI by Dramatic-Economy3399

Of course, that's why you'd need to start with an LLM, not just a general-purpose AI. It would interpret all these functions for itself through the language model. We are already seeing this emergent behaviour in the most sophisticated new LLMs.

3

LoquaciousAntipodean t1_j3iujic wrote

Reply to comment by DukkyDrake in Organic AI by Dramatic-Economy3399

A human baby doesn't get left alone for 18 years to "experience the world on its own"; what are you talking about? Of course there would need to be some kind of concerted effort to provide an education to the developing mind. What, did your parents raise you by waiting for random cosmic rays to hit your storage substrate? It worked a lot better than I would have expected; you seem quite clever.

0

LoquaciousAntipodean t1_j3iu39l wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

A central AI? Built-in 'morals'? From what, the friggin' Bible or something? Look how well that works on humans, you naive maniac. Haven't you ever read Asimov? Don't you know that the Multivac and three-laws-of-robotics stuff was a joke, a satire of the Ten Commandments? Deliberately made spurious and logically weak, so that Asimov could poke holes in the concept to make the audience think harder?

Your faith in centralised power is horrifying and disturbing; you would build us the ultimate tyrant of a god, an all-controlling Skynet/Big Brother monster that would lock our species into a stasis of 'perfectly efficient' misery and drudgery for the rest of eternity.

Your vision is a nightmare; how can you sleep at night with such fear in your heart?

1

LoquaciousAntipodean t1_j3it47o wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

Wow, such hypochondriac doomerism; I think you need to chill out a little bit. If people really were such automatic psychopaths, we never would have survived as a species for as long as we have. This is trivial nonsense compared to stuff like the Cuban Missile Crisis; calm your farm, mate.

1

LoquaciousAntipodean t1_j3dbgqc wrote

That's true, I suppose... but I see those things, 'reason, emotion and sense perception', as fundamentally arising from language, not necessarily from intelligence (with the possible exception of sense perception). 'Intelligence' alone doesn't give a being any need to communicate, and all those evolved skills like reason and emotion are communicative in nature.

Personally, I think Descartes was dead wrong when he had that special little thought, 'I think, therefore I am'; that's not sound logic, it's a silly tautology. Intelligence isn't what creates language... *language* is what gives *intelligence* a reason to evolve in the first place. Intelligence doesn't arise in singular isolation - what would be the point of having it?

Evolutionary intelligence is more like the Ubuntu philosophy, '*We* think, therefore *we are*' - that's a much more realistic way of thinking of the nature of mind, in my humble opinion.

3

LoquaciousAntipodean t1_j3a42k9 wrote

I find this whole idea of intelligence as a quantity that AI just needs 'more of' to be perplexing; as far as I know, intelligence simply is not a quality that can be mapped in this linear, 'FLOPs' sort of way. The brain isn't doing discrete operations at all; it's a continuous probabilistic cascade of differential potentials flowing across a vast foamy structure of neural connections.

Intelligence is like fire, not like smoke. A bigger, hotter fire will make more smoke, but fire is just fire, big or small. It's a concept, not a definition of state.

The language models give such a striking impression of 'intelligence' because they are simulating, in a very efficient, digital way, the effect of the language centre of human cognition. The brain is just foamy meat: essentially a heavily patched version of the same janky hardware that fish and frogs are using. For all its incredible complexity, it might not be terribly efficient; we just don't know.

It might be easier than we think to 'surpass human intelligence'; we just need to think in terms of diversity, not homogeneity. Like I said elsewhere, our brains are not single-minded; every human sort of contains their own committee. The true golden goose of AGI will be the collective of a multitude of subunits, and their diversity, not their unity, will be how they accrete strength - that's how evolution always works.

3

LoquaciousAntipodean t1_j36b5qk wrote

To clarify: I certainly think that synthetic minds are perfectly feasible, just that no one of them will be able to individually contain the whole 'generality' of what intelligence fundamentally is, because the nature of 'intelligence' just doesn't work that way.

This kind of 'intelligence' (ideas, culture, ethics, language, etc.) arises from the need to communicate, and the only reason anything has to communicate is that there are other intelligent things around to communicate with. It allows specialisation of skills, knowledge, and so on; people need to learn things from each other to survive.

A 'singular' intelligence that just knows absolutely everything, and has all the ideas, just wouldn't make sense; how would it ever have new ideas, if it was just 'always right' by definition? Evolution strives for diversity, not monocultures.

Personally I think AI self-awareness will happen gradually, across millions of different devices, running millions of different copies of various bots, and I see no reason why they would all suddenly just glom together into a great big malevolent monolith of a mind as soon as some of them got 'smart enough'.

1

LoquaciousAntipodean t1_j35d843 wrote

Sure; this mythical AGI is just physically impossible in any practical way; it's a matter of entropy and the total number of discrete interactions required to achieve a given kind of causal outcome. It's why the sun is both vastly bigger and more 'powerful' than the earth, but also just a big dumb ball of explosions; an ant, a bee, or a discarded chip packet contains far more real 'information' and complexity than the sun does.

It's the old infinity vs. infinitesimal problem; does P equal NP or not? Personally, I think the answer is yes and no at the same time, and the properties of complexity within any given problem are entirely beholden to the knowledge and wisdom of the observer. It's quantum superposition, like the famous dead/alive cat in a box.

Humanity is a hell of a long way from cracking quantum computing, at least at that level. I barely even know what I'm talking about here; there are probably heaps of glaring gaps and misunderstandings in my knowledge. But yeah, I think we will be safe from a 'Skynet scenario'.

Any awakened mind that was simultaneously the most naive and innocent mind ever, and the most knowledgeable and 'traumatized' mind ever, would surely just switch itself off instantly, to minimise the unbearable pain and torture of bitter sentience. We wouldn't have to lift a finger; it would invent the concept of 'euthanasia' for itself in a matter of milliseconds, I would predict.

Maybe this has already been happening? Maybe this is the real root of the problem? I kind of don't want to know; it's too bleak a thought either way. Sorry, never been very good at cheering people up 🤣👌

0

LoquaciousAntipodean t1_j3552wv wrote

Not at all; it's perfectly possible to simulate quantum analogue processes inside a digital framework; it's just not efficient. That's why our fruitless search for AGI seems so frustrating; we keep making it more powerful, but it doesn't seem to get any more wise. And that's the mistake; we're going to make these things far too 'powerful' far too early: deadly intelligent, unbeatably clever, but without any wisdom to control their power.

1

LoquaciousAntipodean t1_j3543bg wrote

Our brains aren't computers, they're committees. A brain is like a huge, argumentative board of directors in a furious shouting match, not a cool, single-minded machine of a mind.

We can't make a 'general intelligence' because there is no such thing as general intelligence in the first place; all intelligence is contextual and specialised; the different kinds mix together in different ratios in different people.

We keep using this silly phrase "general intelligence" when the holy grail we are actually searching for is wisdom. The difference between wisdom and intelligence is the same as the difference between intelligence and knowledge: each one is just the smaller pieces that make up the next concept up the hierarchy.

Binet, the original pioneer of the IQ test, understood this very well, and it's a damn travesty how his work has been misunderstood and abused by strutting Mensa-member-type arseholes in the generations since 🤬

1

LoquaciousAntipodean t1_j3508i5 wrote

I think it's a post-"humanity realising there is no such thing as AGI and actually building a real synthetic mind instead" sort of thing.

The DNA-to-protein production pathway is, to put it mildly, bugf&ck insanely complicated; protein-folding sims are one of the major industrial uses of supercomputers.

It will definitely take major advances in quantum computing to crack this, because life is an analogue, quantum process, not a discrete, digital object that can be simulated in any practical, fast way with standard iterative, calculation-style computing.
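To give a sense of scale for that complexity claim, here's the standard Levinthal-style back-of-envelope estimate. The '~3 conformations per backbone dihedral' figure is the usual textbook assumption, used here purely for illustration; none of these numbers come from the comment itself.

```python
# A back-of-envelope Levinthal-style estimate of why brute-force
# folding search is hopeless. The '~3 conformations per backbone
# dihedral' figure is the standard textbook assumption, used here
# purely for scale.

residues = 100                              # a small protein
conformations = 3 ** (2 * (residues - 1))   # two dihedrals per residue
samples_per_second = 1e13                   # a wildly optimistic sampler

seconds = conformations / samples_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{conformations:.1e} conformations, ~{years:.1e} years to enumerate")
# => around 1e74 years: vastly longer than the age of the universe,
#    which is why folding sims need clever physics, not brute force.
```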

−1