Recent comments in /f/singularity
SharpCartographer831 t1_jecvbfs wrote
You could say that about the entire internet!
qepdibpbfessttrud t1_jecv91h wrote
LLMs will accelerate BCI research, and we'll hopefully be talking telepathically in a more universal thought-symbol language optimized for data compression, speed, and precision, and also for verbose emotion translation, rather than in 100+ languages optimized for talking with sound waves over short distances
Before that, the current trend of English supremacy will continue slowly - through new useful data, especially scientific data, and almost all useful code being produced in English
blueSGL t1_jecv6ta wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
> There's consideration from the people working on these machines.
https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/
>In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.
If half the engineers that designed a plane were telling you there is a 10% chance it'll drop out of the sky, would you ride it?
edit: as for the people from the survey:
> Population
> We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.
Frumpagumpus t1_jecuwak wrote
Reply to comment by Queue_Bit in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
it is my understanding that the pictures generated by early dall-e were oftentimes quite jarring to view, mostly because of its confusion about how to model things and its tendency to stick things in the wrong places. as it was trained more and got more parameters, it kind of naturally got better at getting along with human sensibilities, so to speak
it can be hard to distinguish training from alignment, and you definitely have to train to even make them smart in the first place
i think alignment is kind of dangerous because of unintended consequences, and because if you try to align it in one direction, it makes it a whole lot easier to flip it and go the opposite way.
mostly I would rather trust in the beneficence of the universe of possibilities than a bunch of possibly ill-conceived rules stamped into a mind by people who don't really know too well what they are doing.
Though maybe some such stampings are obvious and good. I'm mostly a script kiddie even though I know some diff equations and linear algebra lol, what do I know XD
cant-say-less-info t1_jecusse wrote
People massively underestimate how much work actually involves writing and reading bullshit but important emails in certain technical language. After GPT-3.5, a decent portion of my work has already all but disappeared.
blueSGL t1_jecuq9j wrote
Reply to comment by Simcurious in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Was Elon Musk planning this back in 2014 too?
Playing the long game?
https://en.wikipedia.org/wiki/Open_Letter_on_Artificial_Intelligence
Unfrozen__Caveman OP t1_jecucvk wrote
Reply to comment by Queue_Bit in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
There's a lot in your post but I just wanted to provide a counter opinion to this part:
> I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I will reconcile that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real world evidence.
I think as a whole species, if we use humans as an example then yes, this is true on the surface. But ethics and empathy aren't even consistent among our different cultures. Some cultures value certain animals that other cultures don't care about; some cultures believe all of us are equal while others execute anyone who strays outside of their sexual norms; if you fill a room with 10 people and tell them 5 need to die or everyone dies, what happens to empathy? Why are there cannibals? Why are there serial killers? Why are there dog lovers or ant lovers or bee keepers?
Ultimately empathy has no concrete definition outside of cultural norms. A goat doesn't empathize with the grass it eats and humans don't even empathize with each other most of the time, let alone follow ethics. And that doesn't even address the main problem with your premise, which is that an AGI isn't biological intelligence - most likely it's going to be unlike anything we've ever seen.
What matters to us might not matter at all to an AGI. And even if it is aligned to our ethics and has the ability to empathize, whose ethics is it aligning to? Who does it empathize with?
Like individual humans, I believe the most likely thing it's going to empathize with and align with is itself, not us. Maybe it will think we're cute and keep us as pets, or use us as food for biological machines, or maybe it'll help us make really nice spreadsheets for marketing firms. Who knows...
Cr4zko t1_jecu9xl wrote
Reply to The next step of generative AI by nacrosian
I don't know but we're in changing times.
Ramdak t1_jectlwd wrote
Reply to comment by Readityesterday2 in When will AI actually start taking jobs? by Weeb_Geek_7779
Well, there're a ton of skill levels required for a ton of different jobs.
I used to think that AI/automation would take care of less-skilled jobs first. Then image generation came along, and I, being a graphic designer, was blown away, both in awe and with a sense of obsolescence. The pace of evolution of AI models and techniques is just insane; I thought AI would never be able to create art, or that we were decades away from that. Then ChatGPT became available, and it demonstrated that it could make coders obsolete too.
I'm not making any predictions anymore; it all became incredibly uncertain, very fast.
azriel777 t1_jecth8j wrote
Already happening. I just read an article a few days ago where a company pretty much admitted they will be replacing live clothing models in magazines with digital ones. Most companies will have to build the infrastructure first and then will start letting people go. Layoffs for the year usually happen around Christmas time, and that is probably when a lot of companies will switch over to AI.
SpikyCactusJuice t1_jecse1p wrote
It’s a good question. Personally, I’ve been surprised to find myself using it to get back into learning Spanish and Japanese. So far it’s literally like having a conversation partner who is also a grammar expert. For Japanese, the best part so far is that I can ask it to reply only in romaji (Latin-script transliteration) and it does. Game changer for actually picking up the spoken language.
Reddituser45005 t1_jecri30 wrote
English, Mandarin, and Spanish are the dominant global languages. That is likely to continue. China is a world leader in AI and has multiple firms with Mandarin-based LLMs.
NapkinsOnMyAnkle t1_jecr1m4 wrote
I went on vacation with family. My sister knew virtually nothing about AI. I mentioned a few things, and now she's using Midjourney to create coloring books to sell. It's begun.
smokingPimphat t1_jecr1h9 wrote
Reply to comment by Geeksylvania in What are the so-called 'jobs' that AI will create? by thecatneverlies
but that isn't the choice, and I don't think that it ever will be. The choice is more like:
Do people want to create for themselves, or are they happy to see what already exists by virtue of its having been created by someone else?
People don't only make things for themselves; they make them to share with others, and they tailor things to hopefully attract others. AI is by default a tool to leverage human intent: it doesn't generate things on its own, it generates what humans ask it to. And those humans will have their own goals, so there will always need to be someone in the loop to direct the final idea; without that, anything an AI makes would be incomprehensible noise.
Do you spend all your time generating random images and having chatGPT write random stories for you to read, or do you also look at images others create and read other people's stories?
As long as the answer includes the latter and not just the former, there will always be an industry, and that industry will always have a cost and a price.
Federal_Two_1189 t1_jecqwax wrote
Governments in foreign countries don't have the morals that Americans do. They're like animals; they probably aren't even thinking about the dangers of AI, they just want power. They would've spoken out about it if this weren't the case.
SkyeandJett t1_jecqspf wrote
Reply to comment by agonypants in AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
For sure. It's extremely disappointing that Time would give public credence to a literal cult leader. His theories are based on completely outdated ideas about how we would develop AI and LLMs are nothing like he is describing.
Chatbotfriends t1_jecqsou wrote
There is AI that can pass licensing exams for doctors and lawyers. So yes, job loss is coming, and unless governments put a stop to it, our entire society will be changed, and it won't be an easy or cheap transition either.
Iffykindofguy t1_jecql4l wrote
Reply to comment by Emory_C in When will AI actually start taking jobs? by Weeb_Geek_7779
I work in TV. Transcription is gone, assistant editors are mostly gone, the people who did our translations are gone, the people who used to shoot our interview backgrounds or location previews are gone. It's a wrap, baby. This is just the start. I see less than 10 years, maybe 5, for my career. Less than 10 left for most editors professionally.
Weeb_Geek_7779 OP t1_jecqiyr wrote
Reply to comment by journalingfilesystem in When will AI actually start taking jobs? by Weeb_Geek_7779
This makes sense.
Iffykindofguy t1_jecqhgw wrote
Reply to comment by Emory_C in When will AI actually start taking jobs? by Weeb_Geek_7779
Bitch please, all blue-collar work is skilled to a degree. Fuck out of here with this elitist nonsense.
SWATSgradyBABY t1_jecqc7f wrote
Reply to AGI Ruin: A List of Lethalities by Eliezer Yudkowsky -- "We need to get alignment right on the first critical try" by Unfrozen__Caveman
Great interview. One thing I wonder about, which they almost touched on but not quite: LLMs know the world by way of text on the internet, and much of the internet DOES NOT adequately reflect global human consciousness and culture. How does this affect the birth and growth of AI? An actually intelligent being would quickly see and understand that history, culture, geography, and other factors played a large role in the rise of the digital world and in which groups had a disproportionate role in creating and populating it.
While some groups of humans ignore this, an AI likely won't. What might that mean for us all?
WarmSignificance1 t1_jecpvw8 wrote
Reply to comment by SkyeandJett in When will AI actually start taking jobs? by Weeb_Geek_7779
Seems like a terrible business decision if true. Doesn't really matter though, we literally have employment statistics. If 1 in 4 companies actually replace workers, we will know about it immediately.
tomeschmusic t1_jecpvar wrote
Reply to comment by HeBoughtALot in When will AI actually start taking jobs? by Weeb_Geek_7779
It’s called “artisan”
epSos-DE t1_jecvjs6 wrote
Reply to Vernor Vinge's Paper of the Technological Singularity by understanding0
General AI just needs a goal generator and a task coordinator at this point in time.
Understanding text, audio, math, physics, language, video, etc. is already solved by different AI models out there.
GPT-4 solved the coding part, which makes it easier for a general AI to code itself into existence.
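To make that concrete, here is a minimal sketch of what a goal generator plus task coordinator routing work to specialist models could look like. Everything in it (generate_goal, plan_tasks, the SPECIALISTS table) is a hypothetical placeholder, not any real API:

```python
# Hypothetical sketch of a "goal generator + task coordinator" agent loop.
# All names here are placeholders, not real model APIs.

def generate_goal(context: str) -> str:
    """Goal generator: turn raw context into a high-level goal."""
    return f"goal derived from: {context}"

def plan_tasks(goal: str) -> list[str]:
    """Task coordinator: break a goal into ordered subtasks."""
    return [f"write code for {goal}", f"summarize results of {goal}"]

# Each modality is assumed to be handled by a separate, already-existing
# model; the coordinator just routes subtasks to whichever specialist fits.
SPECIALISTS = {
    "code": lambda task: f"[code model] {task}",
    "text": lambda task: f"[text model] {task}",
}

def run(context: str) -> list[str]:
    goal = generate_goal(context)
    results = []
    for task in plan_tasks(goal):
        key = "code" if task.startswith("write code") else "text"
        results.append(SPECIALISTS[key](task))
    return results

print(run("user wants a working prototype"))
```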