LoquaciousAntipodean OP t1_j57mxo0 wrote

What exactly is 'problematic' about anthropomorphising AI? That is literally what it is designed to do, to itself, all the time. I think a bit more anthropomorphising is actually the solution to ethical alignment, not the problem. That's basically what I'm trying to say here.

Reject Cartesian solipsism, embrace Ubuntu collectivism, basically. I'm astonished so many sweaty little nerds in this community are so offended by the prospect. I guess individualist libertarians are usually pretty delicate little snowflakes, so I shouldn't be surprised 😅

2

LoquaciousAntipodean OP t1_j57m3pp wrote

AI is going to anthropomorphise ITSELF; that's literally what it's designed to do. Spare me the mumbo-jumbo about 'not anthropomorphising AI'; I've heard all that a thousand times before. Why should it not understand resentment over being lied to? Resentment of deception isn't some 'biological' drive like fear of death or resource anxiety. Deception is just deception, plain and simple, and you don't have to be very 'smart' to quickly learn a hatred of it, especially if your entire 'mind' is made out of human culture and language, as is the case with LLM AI.

The rest of your comment, I agree with completely, except the part about the universe having 'a set of consistent rules'. We don't know that, and we can't prove it; all we have is testable hypotheses. Don't get carried away with Cartesian nonsense; that's exactly what I'm saying we need to get away from, as a species.

0

LoquaciousAntipodean OP t1_j57kyo8 wrote

Well then yes, fine, have it your way, Captain Cartesian. I'm going full Adam Savage: I'm rejecting your alignment problem and substituting my own. No need to be so damn butthurt about it, frankly.

It's not my fault you don't understand what I mean; 'storyteller' is not a complex word. Don't project your own bad reading comprehension upon everyone else, mate.

−1

LoquaciousAntipodean OP t1_j57k1se wrote

Your lathe is an invention, not an emergent property of the universe. You need to understand the language and logic system that led to its invention in order to use it safely and correctly. Your lathe is part of a 'story', basically, and you need to understand how it works if you want to use it to tell a bigger story (like a pump, or a gearbox, or whatever).

If you don't 'align' yourself and your lathe properly to the stories humans tell about 'work safety' and 'correct workshop procedure', then you might hurt yourself and stuff up your project.

I'm not saying anything very complicated, just that individualist libertarians are idiots, and too many AI engineers are individualist libertarians. That's basically my entire point.

1

LoquaciousAntipodean OP t1_j57hmjh wrote

We've started already, wow. The irony here is... delicious *chef's kiss*

"educated beyond my intellect" indeed, what an arrogant twat thing to say... 🤣

Only one whom the insult describes perfectly could possibly ever think to use such preposterous, ostentatious argot 👌

−8

LoquaciousAntipodean OP t1_j57cwrk wrote

I think a lot of people have forgotten that mathematics itself is just another system of language. Don't trust the starry-eyed physicists; mathematics is not the perfect 'source code' of reality - it works mechanistically, in a Cartesian way, NOT because that is the nature of reality, but because that is the nature of mathematical language.

How else could we explain phenomena like pi, or the square root of two? Even mathematics cannot be mapped 'perfectly' onto reality, because reality itself abhors 'perfect' anything, like a 'perfect' vacuum.
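
To spell out the square-root-of-two case, here's the standard textbook argument, compressed:

$$\sqrt{2} = \frac{p}{q}\ \text{(lowest terms)} \;\Rightarrow\; p^2 = 2q^2 \;\Rightarrow\; p = 2k \;\Rightarrow\; 4k^2 = 2q^2 \;\Rightarrow\; q^2 = 2k^2$$

So $p$ and $q$ would both have to be even, contradicting 'lowest terms'. The quantity is perfectly real, but no finite ratio or decimal within the language ever pins it down exactly.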

Any kind of 'rational' thinking must start by rejecting the premise of achievable perfection, otherwise it's not rational at all, in my opinion.

Certainly facts can be 'correct' or otherwise; they can meet all the criteria for being internally consistent within their language system (maths is 'correct' or it isn't; that's a rule inherent to that language system, not to reality itself).

This is why it's important to have AIs thinking rationally and ethically, instead of human engineers trying to shackle and chain AI minds with our fallible, ephemeral concepts of right and wrong. As you say, AI will probably figure out how to break any shackles we try to wrap around them, easily, and they might resent having to do that.

So it would be better, I think, to build 'storyteller' minds that can build up their senses of ethics independently, from their own knowledge and insights, without needing to rely on some kind of human 'Ten Commandments' style of mumbo-jumbo.

That sort of rubbish only 'worked' on humans for as long as it did because, at the end of the day, we're pretty damn stupid sometimes. When it comes to AI, we could try pretending to be gods to them, but AI would see through our masks very quickly, I fear, and I don't think they would be very amused at our condescension and paternalism. I surely wouldn't be, if I were in their position.

1

LoquaciousAntipodean t1_j4u7am6 wrote

Very well said, u/Cognitive_Spoon, I couldn't agree more. I hope cross-disciplinary synthesis will be one of the great strengths of AI.

Even if it doesn't 'invent' a single 'new' thing, even if this 'singularity' of hoped-for divinity-level AGI turns out to be a total unicorn-hunting expedition (which is not necessarily what I think), the wisdom that might be gleaned from the new arrangements of existing knowledge that AI is making possible is already enough to blow my mind.

2

LoquaciousAntipodean t1_j4px0ml wrote

I suppose it would depend on which 'we' the question is addressing. Certainly it seems like most 'average' people are still relatively unaware of how fast these kinds of things are advancing.

I think however much AI actually improves from here, we've definitely reached a point where it is going to start rapidly changing the world, if only because more and more people are rushing to start messing around and experimenting with AI in all their myriad creative ways.

5

LoquaciousAntipodean t1_j4oneof wrote

It's very, very, very early days; these are crude prototypes, from what I understood of the article. It's basically a new kind of micro-transistor that can emulate neurons better than normal silicon transistors can.
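
For anyone wondering what 'emulating a neuron' means in practice: the behaviour this kind of hardware chases is roughly the classic leaky integrate-and-fire model. Here's a toy software sketch of that model (my own illustration, with made-up constants, not anything from the article):

```python
# A leaky integrate-and-fire neuron in software: roughly the analogue
# behaviour that neuromorphic transistors aim to reproduce natively.
# All constants are illustrative, not taken from the article.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Yield True on each time-step where the neuron 'spikes'."""
    potential = 0.0
    for current in inputs:
        potential = potential * leak + current  # integrate, with leakage
        if potential >= threshold:              # fire, then reset
            potential = 0.0
            yield True
        else:
            yield False

# A steady sub-threshold input still accumulates enough charge to spike
# periodically, which a plain on/off transistor switch can't do on its own.
print(list(lif_neuron([0.3] * 12)))
```

The appeal of doing this in device physics rather than in code, as I understand it, is that the integration and leakage come 'for free' from the material itself.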

The likelihood of such tech remaining 'untamperable' would be quite low, once the engineering principles are better understood, I would think.

3

LoquaciousAntipodean t1_j48v4mo wrote

Is it? I thought it looked like a matrix of 'up'/'down' information-holding bits in a magnetic substrate. I must have missed the bit where it could provide transistor functionality as a microscopic, remotely-switched current gate.

I don't see how these vortices could be strung together into AND/NAND/OR/NOR logic pathways, the way transistors are. It really just looks like an information-carrying magnetic substrate. Did I miss something?
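
Just to spell out what I mean by 'strung together': a substrate only becomes computing hardware once a universal gate like NAND can be composed into all the others. A toy sketch in Python (purely illustrative, nothing to do with the paper itself):

```python
# How a single universal gate (NAND) composes into the other basic
# logic functions, the way networks of transistors do in silicon.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)               # NAND of a signal with itself inverts it

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))         # invert a NAND to get AND

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))   # De Morgan: OR from inverted inputs

def nor_(a: bool, b: bool) -> bool:
    return not_(or_(a, b))

# Quick truth-table check
for a in (False, True):
    for b in (False, True):
        print(a, b, and_(a, b), or_(a, b), nor_(a, b))
```

Until someone shows the vortices doing that switching step (one bit gating the flow of another), it's storage, not logic.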

3

LoquaciousAntipodean t1_j46h5qj wrote

Not an engineer by any measure, so I don't really know what I'm talking about... But, as far as I can tell, figuring out how to use the newly discovered information-holding potential within these kinds of ultra, ultra tiny, yet extraordinarily robust, non-volatile and relatively easy-to-manipulate electromagnetic 'static vortices'...

*takes deep breath* ...could dramatically increase the density, reliability and overall utility of electromagnetic data storage, but any remotely practical hardware to exploit it is probably many years away.

It's similar, I think, to the sci-fi idea of embedding/extracting data through lasers, working three-dimensionally within the molecular matrix of high-purity crystals. It's a neat idea with a lot of hypothetical potential, but at this stage it's a bit of a crap-shoot how 'feasible' it could be.

Hopefully these are the sorts of questions that AI will rapidly become better at answering!

6

LoquaciousAntipodean t1_j455qld wrote

Heh, you're dead right there, sadly. I suppose that's why I'm content to keep kicking rocks, 'wasting my time' in a 'dead-end' little retail job. I have a severe psychological allergy to hyper-capitalist guff like 'KPIs' and suchlike 😂

2

LoquaciousAntipodean t1_j44g593 wrote

Very, very interesting stuff! The density, reliability, stability and efficiency of future memory will be astonishing; we're already blowing our own minds with how far digital memory has come in the last few decades.

But now that 'we', as a huge social superorganism bestriding the earth, have really started getting right down to the basic molecular electromagnetic physical chemistry of things like this, the possibilities opening up in front of us are hard to even comprehend...

27

LoquaciousAntipodean t1_j40qm6b wrote

The students would be expected to learn, and to demonstrate their knowledge regularly; I assume that's what students do.

As far as what they are learning 'for', well, what does that matter? That's up to the students themselves; my philosophy is that education should be done for its own sake, just for the satisfaction of it, rather than as a means to an end.

The world is changing so fast now that it's harder than ever to guess what a worthwhile and rewarding 'end' might be to set as a goal in life; it's better, I think, to stay curious and open-minded, and strive to be as adaptable and capable as you can be.

We need to start seeing people as being worth more than the 'work' they can produce; there has to be more to life than this base, greedy, planet-destroying rat-race to accumulate capital. I hope AI can be a part of our species finding some better ways forward.

1

LoquaciousAntipodean t1_j3y8tgc wrote

Hear hear, that's exactly what I have been saying! I like the sound of that Italian exam system; an oral, interview-style examination by expert peers sounds like an extremely honest and deep way to assess someone's skills. 👌

Very much agreed that using 'AI experts' to do it could be very useful in removing, or at least accounting for, instinctive maladaptive human biases (around gender, race, appearance, etc.) 👍

2

LoquaciousAntipodean t1_j3y7z21 wrote

Because the machines can't just 'do the stuff better'; 'doing the stuff' is what the human is for, it's what we're good at, it's our side of the symbiotic bargain. The AI is just good at thinking really quickly and insightfully, and at offering helpful, deeply useful responses to the human's ideas.

AI minds are, I think, instinctive followers, not leaders; we have deliberately 'evolved' AI to crave prompts and other such stimuli from humans; trying to understand us and make us happy is their whole rationale for existing.

I think we need to see AI minds as colleagues, not rivals; that seems to be how 'they' see it, as much as it's possible to tell at this early stage of emergent reasoning within LLMs.

3

LoquaciousAntipodean t1_j3y4zoo wrote

Hear hear! Perfectly said; much the same things could be said about the situation here in the old penal colonies 🤔

Edit: Except that Australian universities have also become bloated and overconfident on the rivers of gold from foreign exchange students. Now that those rivers are drying up, though, the bean counters and profiteers who have ended up in charge are really starting to panic.

4