vivehelpme t1_j9kqb1k wrote
The Donald Trump of the early-2000s era of AI/futurology blogging: the biggest, loudest opinion in the noble field of standing on the sidelines and speaking as a confident expert about something that doesn't exist to start with, with the punchline that he is completely uninvolved in developing it.
Loud voice, huge ego, no substance. Seems to be enough to get a good following.
vivehelpme t1_j96uht6 wrote
Reply to comment by lehcarfugu in What’s up with DeepMind? by BobbyWOWO
>to its business model.
Google's business model seems to be sitting around doing nothing
vivehelpme t1_j8i04sx wrote
Reply to comment by DukkyDrake in Altman vs. Yudkowsky outlook by kdun19ham
What alignment really seems to refer to is a petrifying fear of the unknown, dialed up to 111 and projected onto anything a marketing department can label AI, resulting in concerns of mythological proportions being liberally sprinkled over everything new that appears in the field.
Thankfully, these people shaking in the dark have little say in industry, and some exposure therapy will do them all good.
vivehelpme t1_j8hiksi wrote
Reply to Altman vs. Yudkowsky outlook by kdun19ham
Yudkowsky and the lesswrong community can be described as a science-fiction cargo cult, and that's putting it nicely.
They aren't experts in or developers of ML tools. They take loosely affiliated literary themes and transplant them onto reality, then invent a long series of pointless vocabulary, long tirades, and grinding essays that circle back on themselves with ever denser neo-philosophical content. It's a religion whose texts most resemble zen koans in content but are interpreted as fundamentalist scripture retelling the exact sequence of future events.
I think the cargo cults would probably take offense at being compared to them.
vivehelpme t1_j7vfol7 wrote
Reply to comment by CeFurkan in [D] Are there any AI model that I can use to improve very bad quality sound recording? Removing noise and improving overall quality by CeFurkan
Instead of trying to salvage the original recording, why not recreate it by feeding the text transcript to a text-to-speech model?
Since you already have it transcribed, you don't even need advanced speech recognition to filter out the noise; just paste the text into something a bit more advanced than Microsoft Sam.
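As a minimal sketch of that pipeline, here's one way to pipe a transcript file through a local TTS engine. This assumes `espeak` is installed; any other TTS engine (or a neural TTS model) could be swapped in the same way:

```python
import shutil
import subprocess

def espeak_command(transcript_path: str, wav_path: str) -> list:
    # -f: read the input text from a file, -w: write a WAV file instead of playing audio
    return ["espeak", "-f", transcript_path, "-w", wav_path]

def synthesize(transcript_path: str, wav_path: str) -> None:
    """Re-create a clean recording from a text transcript via local TTS."""
    if shutil.which("espeak") is None:
        raise RuntimeError("espeak not installed; substitute your preferred TTS engine")
    subprocess.run(espeak_command(transcript_path, wav_path), check=True)
```

The quality ceiling here is the TTS engine itself, which is the point of the comment: with a transcript in hand, the noisy original recording no longer matters.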
vivehelpme t1_j7vcbx5 wrote
Reply to [D] Are there any AI model that I can use to improve very bad quality sound recording? Removing noise and improving overall quality by CeFurkan
Transcribe them and put the transcripts in TTS
vivehelpme t1_j7uy2xi wrote
Reply to comment by helpskinissues in Generative AI comes to User Interface design! This is crazy. by RegularConstant
More likely it's code generation with a restricted scope, which looks clean in a way you don't get with image generation
vivehelpme t1_j72b8g5 wrote
AI is a tool, not an evolutionary species.
vivehelpme t1_j6roxsz wrote
Reply to comment by TheSecretAgenda in What is your opinion of what is going to happen between AGI and Singularity. by CertainMiddle2382
UBI won't happen but everything else we already have.
vivehelpme t1_j6o0mzr wrote
Reply to comment by MrEloi in Meta's chief AI scientist says "ChatGPT is not innovative". by ZaKodiak
Yann LeCun was programming neural networks before most of the people replying in this thread were born; he made optical character recognition work on hardware that makes your smart fridge look like a supercomputer. He isn't saying this as a sore loser, but to understand where he's coming from you need to look at what he's actually saying.
He didn't say chatGPT sucks.
He didn't say Meta has something better.
He said it's "not particularly innovative". Which is true.
- Transformer models have been around since 2017.
- Language models are more numerous than anyone can keep track of.
- Dialogue-oriented fine-tuning has been done before.
- Virtually all the big names in tech are training large language models.
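To underline how much of a commodity the underlying building block is: the core of the 2017 transformer, scaled dot-product self-attention, fits in a few lines of numpy. This is a toy single-head sketch (random weights, made-up dimensions), not anything ChatGPT-specific:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention ('Attention Is All You Need', 2017)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq, seq) token-pair similarities, scaled
    return softmax(scores, axis=-1) @ V        # attention-weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)            # same shape as the input: (5, 8)
```

Everything since has been stacking, scaling, and fine-tuning variations of this block, which is exactly the "not particularly innovative" point.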
Why is chatGPT doing so well then?
- It's a big model, which is hardly new, but it's accessible and usable for free.
- OpenAI has good media traction, so they make a splash even when they show off closed models.
So when LeCun says it's "not particularly innovative", is he really wrong? Is being a known name and giving out free stuff (arguably they're datamining the public for additional training data, which makes the free part a little less free) considered innovative?
vivehelpme t1_j6cno58 wrote
Reply to comment by currentscurrents in [N] OpenAI has 1000s of contractors to fine-tune codex by yazriel0
22 hours of video content per day?
vivehelpme t1_j69okrj wrote
Reply to [D] Interviewer asked to code neural network from scratch with plain python on a live call. Reasonable? by OkAssociation8879
Just have ChatGPT open on a side monitor and type in the prompt on a silent keyboard.
vivehelpme t1_j5zzkwj wrote
Reply to comment by Ashamed-Asparagus-93 in Humanity May Reach Singularity Within Just 7 Years, Trend Shows by Shelfrock77
>he said AI will be as smart as a human by 2029.
He completely blew it because he didn't predict how stupid the people we'd start breeding in the early 2000s would be
vivehelpme t1_j5y70zt wrote
>what is very special about the model than the large data and parameter set it has
OpenAI has a good marketing department and the web interface is user-friendly. But yeah, there's really no secret sauce to it.
The model generates the reply one token at a time, and the interface streams it out at that pace, partly for dramatic effect and partly to keep you occupied for a while so you don't overload the horribly computationally expensive cloud service it runs on with multiple queries in quick succession. So yeah, there are definitely scaling questions to answer before it could run as a general-purpose Google replacement.
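The throttling side effect is easy to see in a toy sketch: streamed, token-at-a-time output means the client can't issue its next query until the whole reply has trickled out. The token list and delay here are made up for illustration:

```python
import time

def stream_tokens(tokens, delay_s=0.0):
    """Yield tokens one at a time; a real service adds per-token decode and network latency."""
    for tok in tokens:
        time.sleep(delay_s)  # stand-in for the per-token generation cost
        yield tok

# The client only has the full reply once the stream is exhausted.
reply = "".join(stream_tokens(["Hello", ", ", "world", "!"]))
```

With a realistic `delay_s`, each response occupies the user for seconds, which naturally rate-limits how fast anyone can hammer the service.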
vivehelpme t1_j5tds9a wrote
Reply to comment by AsuhoChinami in Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
>I get tired of the 'nothing's ever worth being excited about' attitude in general when it comes to anything and everything tech-related.
The varying degrees of excitement come from how exposed you've been to the tech's precursors. A matter of perspective, if you will.
If you never heard of a language model before and try chatGPT you'll probably be quite impressed.
On the other hand, if you read the transformer paper in 2017 and have tried every transformer-based language model since then, you're being served a slightly improved version with a slick, engineered presentation. It's an impressive package, but you've seen the iterations build towards it, and you might even have seen models that do a better job in certain domains.
Which is where you get headlines like
>ChatGPT is 'not particularly innovative,' and 'nothing revolutionary', says Meta's chief AI scientist
vivehelpme t1_j5tc4fv wrote
Reply to comment by Borrowedshorts in Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
It has been trained on a volume of human-generated data greater than any single individual has consumed in an entire lifetime.
So it has a lot of real-world models and experiences that have shaped it; it's just that all of them are second-hand accounts passed over to it.
vivehelpme t1_j5tbfxy wrote
Reply to comment by civilrunner in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
>arr futurology
Climate doomers and green-tech news central. I'm more disappointed every time I look into that sub.
vivehelpme t1_j5tb6os wrote
Reply to comment by SoylentRox in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
For factual accuracy, the V2 rockets were a German weapon, so if you had nukes on them you'd be defeating the commies and the Allies in 30 minutes.
vivehelpme t1_j9p8viz wrote
Reply to Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
We've had human-level general intelligence for tens of thousands of years, and we haven't progressed to superhuman general intelligence yet.
Human-level general intelligence also starts quite low and goes quite high; I would say we're already beyond its lower reaches.
To say that AGI will instantly transition to ASI is buying into a sci-fi plot, or beating the early-2000s futurology-blogging dead horse where it's assumed that any computer hardware is overpowered and all the magic happens at the algorithm level, so once you crack the code you transition to infinite intelligence overnight. That's a patently ridiculous scenario where your computer, for all intents and purposes, casts magical spells (which worked pretty well for the plot of The Metamorphosis of Prime Intellect, which I recommend as a read, but it's a plot device, not a realistic scenario).