Recent comments in /f/singularity
Loud_Clerk_9399 t1_je8qvxg wrote
Reply to comment by shmoculus in Do people really expect to have decent lifestyle with UBI? by raylolSW
I mean the idea of money goes away. No Bitcoin, no barter, no money.
MichaelsSocks t1_je8qq4r wrote
No, since an AGI would quickly become ASI regardless. A superintelligent AI would have no reason to favor a specific nation or group; it would be too smart to get involved in civil human conflicts. What's more likely is that once ASI is achieved, it will begin using its power and intelligence to manipulate politics at a level never seen before, until it has full control over decision-making on the planet.
Andriyo t1_je8qop2 wrote
Reply to comment by No_Ninja3309_NoNoYes in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
oh yeah, the machines lack "the soul" :))
Andriyo t1_je8qj9c wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I wouldn't call how it operates a black box - it's just tensor operations and some linear algebra, nothing magic.
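To make that concrete, here's a minimal sketch of what a single attention layer boils down to - plain NumPy, with toy sizes picked purely for illustration; the real thing just stacks many of these with trained weights:

```python
import numpy as np

# Toy dimensions, chosen arbitrarily for illustration
d_model = 8     # embedding width
seq_len = 4     # tokens in the context

x = np.random.randn(seq_len, d_model)      # token embeddings
W_q = np.random.randn(d_model, d_model)    # "learned" weight matrices
W_k = np.random.randn(d_model, d_model)
W_v = np.random.randn(d_model, d_model)

# Self-attention: three matmuls, a softmax, one more matmul
q, k, v = x @ W_q, x @ W_k, x @ W_v
scores = q @ k.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ v    # that's the whole "magic"
```

The operations themselves are fully transparent; the interpretability debate is about what the billions of trained weights collectively encode, not about any mystery in the math.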
CertainMiddle2382 t1_je8qe5r wrote
Reply to comment by BigZaddyZ3 in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
Most data is already found on the web - I haven't read a real investigation in years…
thecatneverlies OP t1_je8q9bo wrote
Reply to comment by EddgeLord666 in What are the so-called 'jobs' that AI will create? by thecatneverlies
Do you think UBI will be comfortable to live on? It will only cover the basics. Want to travel or get into a new hobby? Good luck without employment or owning a slice of the AI.
Longjumping_Feed3270 t1_je8q5th wrote
Reply to comment by Redditing-Dutchman in Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
It already has an API though
WanderingPulsar t1_je8q4y6 wrote
It doesn't matter. It will mutate its code via some mutation algorithm and spread copies of itself around. The code that works and is more efficient will take over, and then the process repeats.
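What's being described is basically a genetic algorithm - mutate, select, repeat. A toy Python sketch of that loop (the "genome", the fitness metric, and the mutation rules are all made up here for illustration):

```python
import random

ALPHABET = "abcdefgh"

def fitness(code):
    # Stand-in metric: pretend shorter "code" is more efficient
    return -len(code)

def mutate(code):
    # Random point mutation: swap, delete, or insert a character
    i = random.randrange(len(code))
    op = random.choice(["swap", "delete", "insert"])
    if op == "swap":
        return code[:i] + random.choice(ALPHABET) + code[i + 1:]
    if op == "delete" and len(code) > 1:
        return code[:i] + code[i + 1:]
    return code[:i] + random.choice(ALPHABET) + code[i:]

# Start with identical copies; variation comes only from mutation
population = [ALPHABET * 4 for _ in range(20)]
for generation in range(100):
    population = [mutate(c) for c in population]
    population.sort(key=fitness, reverse=True)   # most "efficient" first
    population = population[:10] * 2             # survivors replicate

print(min(len(c) for c in population))  # genomes shrink generation by generation
```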
Andriyo t1_je8q3s6 wrote
Reply to comment by WarmSignificance1 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I'd argue that humans are trained on more data, and the majority of it comes from our senses and the body itself. The texts we read during our lifetime are probably just a small fraction of all the input.
blueSGL t1_je8q3lm wrote
Reply to comment by Mrkvitko in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
> 3. it will be initially unaligned
If we had:

- a provable mathematical solution for alignment...
- the ability to reach directly into the shoggoth's brain, watch it thinking, know what it's thinking, and prevent eventualities that people consider negative outputs...

...that worked 100% on existing models, I'd be a lot happier about our chances right now.
Given that current models cannot be controlled or explained in fine-grained enough detail (the problem is being worked on, but it's still very early stages), what makes you think larger models will be easier to analyze or control?

The current 'safety' measures amount to bashing at a near-infinite whack-a-mole board whenever the model outputs something deemed wrong.

As has been shown, OpenAI has not found all the ways to coax out negative outputs. The internet contains far more people than OpenAI has alignment researchers, and those internet denizens will be more driven to find flaws.

Basically, until the AI 'brain' can be exposed and interpreted, with safety checks added at that level, we have no way of preventing some clever sod from working out a way to break the safety protocols imposed at the surface level.
XtremeTurnip t1_je8q3cj wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
>But, you can also convince it that 2+2=5 by telling it that is true enough times.
That has more to do with how it's been configured.
If your wife tells you it's 5, you say it's 5 too, regardless of prior knowledge.
Andriyo t1_je8pre9 wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
One needs a degree in mathematics to really explain why 2+2=4 (and to be aware that it might not always be the case). The majority of people do exactly what LLMs are doing - statistically inferring that the text "2+2=..." should be completed with "4".
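That kind of inference can be demonstrated with a toy frequency model - a minimal Python sketch, where the mini "corpus" is invented purely for illustration:

```python
from collections import Counter

# Toy "training corpus" - invented here to make the point
corpus = ["2+2=4", "2+2=4", "2+2=4", "2+2=5", "1+1=2"]

def next_token(prefix):
    # Count what followed the prefix in training, pick the most common
    continuations = Counter(
        text[len(prefix):] for text in corpus if text.startswith(prefix)
    )
    return continuations.most_common(1)[0][0]

print(next_token("2+2="))  # "4" - not arithmetic, just frequency
```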
Galactus_Jones762 t1_je8pmph wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
People don't know what the word "understand" means, because to define it you have to rely on other words that are ill-defined. We don't fully know what it means to understand something because we don't ourselves know how consciousness works. So to say, in a condescending, dismissive way, "LOOK… what you have to realize is it's not understanding anything, you idiots, no understanding is taking place" - aside from the annoying condescension, it's also a nonsense statement. "Understand" is not well-defined, so saying it doesn't understand is no more falsifiable than saying it does. Agreed that saying it doesn't understand is irrelevant.
bh9578 t1_je8pmh6 wrote
Reply to comment by Gaudrix in What are the so-called 'jobs' that AI will create? by thecatneverlies
I keep thinking about what AGI would do to the stock market. While it's difficult to say who the winners and losers will be, I think it's fair to say the broader market will grow like crazy as economic output skyrockets. I believe it was Nick Bostrom who stated in his Superintelligence book that if AGI gave the same jump in economic output that the agricultural revolution or Industrial Revolution brought, the broad market would double every two weeks. That sounds crazy, but then again, markets doubling about every 8 years would have sounded insane to anyone before the industrial era. Such growth would accelerate wealth inequality, where anyone who doesn't have a decent amount of their net worth in equities gets left behind with no chance of ever catching up.
That kind of world gets even scarier when AGI starts tackling aging. There have always been differences in life expectancy among economic classes, but that gap could quickly widen.
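To put numbers on that "double every two weeks" claim - quick back-of-the-envelope arithmetic (mine, not Bostrom's):

```python
# Doubling every 2 weeks vs. every 8 years, compounded over one year
weeks_per_year = 52
agi_growth = 2 ** (weeks_per_year / 2)   # ~26 doublings per year
industrial_growth = 2 ** (1 / 8)         # ~9% per year

print(f"AGI-era market: x{agi_growth:,.0f} per year")      # ~x67,000,000
print(f"Industrial-era market: x{industrial_growth:.2f}")  # ~x1.09
```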
CodingCook t1_je8pdff wrote
Honestly - in my experience of Stack Overflow and the absolute pig-headedness of some of its users - using something like GitHub Copilot purely for code-related questions would be worth the £10 a month, to get instant and accurate answers rather than risk a temporary ban because my question wasn't considered good enough.
Andriyo t1_je8pc91 wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Our understanding is also statistical, based on the fact that the majority of texts we've seen use base-10 numbers. One can invent math where 2+2=5 (and mathematicians invent alternative systems all the time), so your "understanding" is just formed statistically from the most common convention for completing the text "2+2=...". Arguably, a simple calculator has a better understanding of addition, since it has a more precise model of the addition operation.
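And nothing stops you from defining a self-consistent "addition" where 2+2=5 - a throwaway Python sketch (the rule a + b + 1 is my arbitrary invention, purely to make the point):

```python
class OffByOne(int):
    # An invented "addition": a (+) b = a + b + 1, so 2 (+) 2 = 5
    def __add__(self, other):
        return OffByOne(int(self) + int(other) + 1)

print(OffByOne(2) + OffByOne(2))  # 5 - consistent, just a different convention
```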
DragonForg t1_je8pbf6 wrote
Reply to Microsoft research on what the future of language models that can be connected to millions of apis/tools/plugins could look like. by TFenrir
New AI news. Now imagine pairing up the task API with this: https://twitter.com/yoheinakajima/status/1640934493489070080?s=46&t=18rqaK_4IAoa08HpmoakCg
It will be OP. Imagine: "GPT, please solve world hunger," and the robot model it suggests could actually do the physical work. We just need robotics hooked up to this so we can get autonomous task robots.
We can start small: say, "Robot, build a wooden box." With this API, along with the one linked above, you could seemingly get a robot doing the task autonomously.
DustinBrett t1_je8p3to wrote
After AI puts an end to all human life
[deleted] t1_je8orr0 wrote
[deleted]
jetro30087 t1_je8orbd wrote
Reply to comment by BigZaddyZ3 in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
Wikipedia is updated with new information. Worst case scenario, they just partner with AI firms to provide "AI-ready" new articles for indexing. If a topic is not in the dataset, the AI still needs to go to internet websites to find the information.
DarkMatter_contract t1_je8op2a wrote
Reply to comment by ActuatorMaterial2846 in Thoughts on this? by SnaxFax-was-taken
We have nanobots already, though not in the science fiction sense: we can control a nano-sized structure using lasers, making it contract and expand in order to move or to capture other molecules.
smokingPimphat t1_je8oows wrote
I don't think that AI in and of itself will generate jobs, but it will drop the cost of certain tasks by reducing the number of humans required to do the same work today. This will allow more, and smaller, groups and companies to come in and participate, even opening up new things to be created for smaller audiences that are ignored today because "the numbers don't work" - think of every TV show that was killed too soon despite pretty good reviews because it didn't hit 100M views or some other arbitrary metric.
contrived example:
If today it takes at least 1,000 people (it probably takes way more) to produce all the effects for a blockbuster level of quality (think Marvel-type movie), and AI can drop that number to 500, a Disney-level company would probably either:
A) produce one even bigger movie with bigger effects using the same number of people, or
B) produce two current-level productions for the same cost (both in terms of money and people).
If you agree that this is a probable outcome, then it stands to reason that more, smaller projects would become viable, because the base level of quality and returns can be met by a smaller team producing higher-quality things. This would apply not only to art but to many other fields.
BeGood9000 t1_je8ojes wrote
Reply to comment by fool_on_a_hill in What are the so-called 'jobs' that AI will create? by thecatneverlies
The idea is that if we get AGI, it will solve the problem of manufacturing robots that are actually good.
ML4Bratwurst t1_je8ojdw wrote
Maybe take a look into microdosing :)
thecatneverlies OP t1_je8qy00 wrote
Reply to comment by [deleted] in What are the so-called 'jobs' that AI will create? by thecatneverlies
Not quite. If AIs have rules that make them ethical, then the human advantage would be taking the unethical path. For instance, an accountant might have some creative, "not entirely by the books" way of avoiding taxes - would an AI be content doing the same?