ChronoPsyche
ChronoPsyche t1_izuor6o wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
That's completely irrelevant to the point I was making. Feel free to engage with what I was saying or make whatever point you are trying to make directly.
ChronoPsyche t1_izukaf2 wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
I'm not sure how that is relevant at all to what I'm saying.
The technological singularity is a hypothetical future event that by its very nature is extremely difficult to predict. Anyone acting like they know exactly what's going to happen, and who thinks everyone else is stupid for not agreeing, is speaking from a place of ignorance.
The smartest AI researchers and thinkers who are actually involved in advancing this technology are the ones speaking with the most uncertainty and restraint when making predictions. So I would advise you to keep that in mind before saying things like this:
>Pretty pathetic that this needs to be explained. We are dealing with some solid skulled individuals.
There is a lot we don't know about what will happen. Nobody knows everything, including the experts, so try to be a little less certain of your opinions and a little less hostile to others' opinions. Keep an open mind. Maybe you'll learn something.
ChronoPsyche t1_izugiu1 wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
You could learn a few things from this: https://en.m.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect#:~:text=The%20Dunning%E2%80%93Kruger%20effect%20is,overestimate%20their%20ability%20or%20knowledge.
ChronoPsyche t1_izhhj4b wrote
Reply to ChatGPT solves quantum gravity? by walkthroughwonder
The posts on this sub are becoming worse than clickbait tabloid news. I've about had it with people misunderstanding what ChatGPT does and thinking it's some omniscient superintelligence.
ChronoPsyche t1_izdn9n4 wrote
Reply to I had a chat about time with Character.AI's version of LaMDA. It seems to think it's omniscient. by KHDTX13
It doesn't think anything about itself. It's playing a role that it was told to play.
ChronoPsyche t1_iz0my3r wrote
Reply to comment by Swimming_Gain_4989 in What do you do for a living and how can current AI tools help you be more successful? Let's share our ideas and start making use of these cool advances. by DungeonsAndDradis
Sometimes, if I keep going back and forth with it over code that isn't working, it starts to get an attitude, saying things like "Like I said," and when that starts happening, the next time I say anything I get an error from OpenAI and have to reset the thread lmao.
ChronoPsyche t1_iyzqmrz wrote
Reply to What do you do for a living and how can current AI tools help you be more successful? Let's share our ideas and start making use of these cool advances. by DungeonsAndDradis
I'm a software engineer and game developer, and I'm pretty much using ChatGPT as a replacement for Stack Overflow. It's a great tool whenever I'm stuck on something or have a quick question about syntax I can't remember.
It's definitely way faster and more accurate than searching my question on Google. Occasionally it will give me the wrong answer, but I'd say for the types of questions I ask, it has around a 90% accuracy rate.
And what's really cool is that if I'm working on a hard problem, I can update ChatGPT with the status of my debugging attempts and any new problems I've run into, and it will remember everything I've said previously and take it into account when answering my new but related questions. Super cool.
Feels more like a brainstorming partner than just a question-answerer.
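If anyone wants to script this kind of back-and-forth instead of using the web UI, here's a rough sketch using the openai Python library against base GPT3 (ChatGPT itself has no public API yet, so the model name and prompt framing here are just my assumptions, not anything official):

```python
# Rough sketch: a stateful "debugging partner" on top of base GPT-3.
# Assumes the `openai` package and an OPENAI_API_KEY env var; the model
# name and prompt framing are illustrative assumptions, not prescriptive.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = ["You are a helpful programming assistant."]

def ask(question: str) -> str:
    history.append(f"User: {question}")
    # Re-send the accumulated transcript so the model "remembers" context.
    prompt = "\n".join(history) + "\nAssistant:"
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0.2,  # low temperature suits code questions
    )
    answer = response["choices"][0]["text"].strip()
    history.append(f"Assistant: {answer}")
    return answer

print(ask("Why might this Python raise KeyError: d = {}; d['x'] += 1 ?"))
print(ask("OK, and what's the idiomatic fix?"))  # follow-up relies on prior context
```

The "memory" is just the transcript being re-sent every turn, which is also why long sessions eventually hit the context limit.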
ChronoPsyche t1_iyutt05 wrote
Reply to comment by Ok_Ganache_6570 in Took a break from asking about the limitations imposed on it to ask it a more pressing question. by not_into_that
Respectfully, have you been under a rock?
ChronoPsyche t1_iytra7q wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
Well you can believe whatever you want but you're not basing those beliefs on anything substantive.
Honestly, the rate of progress since 2012 has been very slow. It's only in the past few years that things have picked up substantially and that was only because of recent breakthroughs with transformer models.
That's kind of how the history of AI progress has worked: a breakthrough leads to a surge of progress, the surge eventually plateaus and stalls as bottlenecks are reached, and then a new breakthrough sets off another surge.
It's not guaranteed there will be another plateau before AGI, but we're gonna need new breakthroughs to get there, because as I said, we are approaching bottlenecks with the current technology that will slow down the rate of progress.
That's not necessarily a bad thing, by the way. Our society isn't currently ready to handle AGI. It's good to have some time pass to actually integrate the new technology rather than developing it faster than we can even use it.
ChronoPsyche t1_iyp0h1z wrote
Reply to comment by Ribak145 in Have you updated your timelines following ChatGPT? by EntireContext
It's not even an evolution; it's just fine-tuning of GPT3 for a particular use case. Nothing ChatGPT does can't be done with regular GPT3. It just works differently out of the box. Meanwhile, there are many things regular GPT3 can do that ChatGPT can't.
Regular GPT3 is like an operating system and ChatGPT is like an application running on that operating system.
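To make the analogy concrete, here's a rough sketch of how the same base model can act as different "applications" depending on the prompt. The model name and prompts are illustrative assumptions on my part:

```python
# Sketch of the "OS vs. application" analogy: one base model,
# many "applications" -- each app is just a different prompt prefix.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def run_app(app_prompt: str, user_input: str) -> str:
    """Run one 'application' on the shared base-model 'operating system'."""
    response = openai.Completion.create(
        model="text-davinci-003",  # base-model name is an assumption
        prompt=f"{app_prompt}\n\nInput: {user_input}\nOutput:",
        max_tokens=200,
    )
    return response["choices"][0]["text"].strip()

# Two different "applications" running on the same "operating system":
chat_reply = run_app("Respond as a friendly conversational assistant.", "Hi there!")
sql_query = run_app("Translate the request into a SQL query.", "count users by country")
```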
ChronoPsyche t1_iyp084x wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
Eventually, but without any knowledge of specific breakthroughs arriving very shortly, your 2025 estimate is an uninformed guess at best.
ChronoPsyche t1_iyp04j8 wrote
Reply to comment by EntireContext in Have you updated your timelines following ChatGPT? by EntireContext
It will increase, but the size of the increases will slow down without major breakthroughs. You can't predict the rate of future progress solely from the rate of past progress in the short term.
You guys take the "exponential growth" stuff way too seriously. All that refers to is technological growth over human history as a whole; not every time scale follows the same growth pattern. If it did, we'd have reached the singularity a long time ago.
Bottlenecks sometimes occur in the short term and the context-window problem is one such bottleneck.
Nobody doubts that we can solve it eventually, but we haven't solved it yet.
There are potential workarounds like external memory systems, but those are only a partial fix that enables more modest context-window increases. External memory systems are not feasible for AGI because they are way too slow and don't scale well dynamically, not to mention they sit outside the neural network itself.
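To make "external memory" concrete, here's a toy sketch of the retrieval idea: store past text as vectors and pull back only the most relevant chunks to fit in the window. The embed() function is a placeholder stand-in, not any particular model:

```python
# Toy sketch of an external-memory workaround: store past text as vectors,
# retrieve only the most relevant chunks to squeeze into the context window.
# embed() is a placeholder (an assumption, not a specific product);
# retrieval here is brute-force cosine similarity.
import numpy as np

memory_texts: list[str] = []
memory_vecs: list[np.ndarray] = []

def embed(text: str) -> np.ndarray:
    # Placeholder: hash words into a fixed-size bag-of-words vector.
    vec = np.zeros(512)
    for word in text.lower().split():
        vec[hash(word) % 512] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def remember(text: str) -> None:
    memory_texts.append(text)
    memory_vecs.append(embed(text))

def recall(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    scores = [float(q @ v) for v in memory_vecs]
    top = np.argsort(scores)[-k:][::-1]
    return [memory_texts[i] for i in top]

# Each lookup scans all of memory -- fine for a toy, but this is exactly
# the kind of overhead that makes external memory slow at scale.
```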
In the end, we need either an algorithmic breakthrough or quantum computers to solve the context-window problem as it relates to AGI. An algorithmic breakthrough is more likely to arrive before quantum computers become viable. If it doesn't, we may be waiting a long time for AGI.
Look into the concept of computational complexity if you want to better understand the issue we are dealing with here.
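To put rough numbers on it: standard self-attention scales roughly quadratically with context length, so every doubling of the window roughly quadruples the attention cost. A back-of-the-envelope sketch (the n² × d cost model is a simplification for illustration, not an exact FLOP count):

```python
# Back-of-the-envelope: why context windows are expensive.
# Standard self-attention does roughly n^2 * d work per layer,
# where n = context length and d = model dimension. Constant
# factors are ignored; this shows only the growth rate.
d = 1024  # model dimension (illustrative)

for n in [2_048, 4_096, 8_192, 16_384]:
    cost = n * n * d
    print(f"context {n:>6}: relative attention cost {cost / (2_048**2 * d):5.1f}x")
# Prints 1.0x, 4.0x, 16.0x, 64.0x -- doubling the window quadruples the cost.
```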
ChronoPsyche t1_iyozh1o wrote
I haven't changed my timeline at all, given that ChatGPT is literally just GPT3 fine-tuned to be more conversational out of the box.
I think what's happening is a lot of people who haven't really explored the potential of GPT3 itself are now becoming aware of it since ChatGPT is free to use (for now) and easier to use.
Base GPT3 is still much more impressive because it is much more versatile. It can do everything ChatGPT can and a lot more. It just takes a bit more setup work.
ChronoPsyche t1_iylwfak wrote
Reply to comment by cootiecatchers in Is my career soon to be nonexistent? by apyrexvision
Well, that's assuming the complexity of software doesn't scale up as things get easier. To me it seems that it absolutely would. Software engineering has been getting easier from the very start, yet demand for software engineers has only increased, because the complexity of software has kept growing and the uses for software have exploded. I see no reason for that trend to change. The nature of the job will certainly change, and nobody should expect to be doing exactly what they do today in ten years for the same money, but if they keep up with the tech, their skills will still be needed, unless we have AGI by then.
ChronoPsyche t1_iylms0f wrote
Reply to comment by turntable_server in Is my career soon to be nonexistent? by apyrexvision
>This will affect the outsourcing, but at the same time it will also create new types of jobs both home and abroad.
And that's really the thing. Software engineering as a discipline has always been rapidly changing. It's moving faster now than ever, but it has been evolving at a disruptive pace ever since Fortran was developed more than six decades ago.
My grand-uncle was among the first software engineers, using Fortran in the 1950s. Nowadays he knows very little about the current state of software engineering, mostly because he chose not to keep current with things, but it just goes to show how fast the field has been changing.
ChronoPsyche t1_iylg7zx wrote
Reply to comment by AsuhoChinami in Is my career soon to be nonexistent? by apyrexvision
Well, now you know what I meant. No job is safe once/if the singularity happens, but whether the concept of jobs will even matter at that point is anyone's guess. I'd wager it won't. Whether that's because we're all slaves to a master AI or living in utopia is the question.
ChronoPsyche t1_iylfryt wrote
Reply to comment by Superduperbals in Is my career soon to be nonexistent? by apyrexvision
Certainly. Read my other reply on this thread. Coding is not the same as software engineering. These are the general steps in the software development life cycle.
- Requirements Gathering
- Software Design
- Implementation
- Testing
- Integration
- Deployment
- Maintenance
Coding only applies to step #3. It's also the easiest step; any professional software engineer will tell you this. In fact, a lot of coding jobs in developed countries are already outsourced to cheap labor markets (reducing demand for coders domestically). Here in the US, for example, it's very common for software engineers to collaborate remotely with contract-to-hires from India to speed up implementation.
In general, it's very easy to train AI to program because of how many publicly available repos there are online to train on. Most of those repos, though, are for open-source software and personal projects. Commercial-grade applications usually live in private repos that can't be trained on, which limits the applicability of these tools, and even that is just the implementation step.
All the other steps are, and will remain, much more difficult for AI because there are no trainable datasets that perfectly encapsulate those processes. It will take AI with much more generalist capabilities to be anywhere near competent enough to entirely replace software engineers. We basically need competent AGI before we get to that point.
ChronoPsyche t1_iyldr7d wrote
Reply to comment by AsuhoChinami in Is my career soon to be nonexistent? by apyrexvision
True, it's a good thing that wasn't what I was arguing. I was pretty clearly talking about pre-singularity AI in the near/medium term. Once/if the singularity happens, all assumptions and predictions go out the window. There are just too many unknown variables to even begin to fathom what the status of our jobs will be, much less if the concept of jobs will even be relevant anymore.
By the way, AI doesn't do software engineering "less than perfectly"; it doesn't do it at all. What's being discussed here is programming snippets of code or very small programs. If you ask it how to make a large, enterprise application, it will give you general guidelines you could get off Google, and that's it.
Programming is to software engineering what construction is to civil engineering. The main difference is that software engineers also tend to be programmers. But programming is just a tool for building software; knowing how to code doesn't mean you know anything about how to actually build commercial software applications.
EDIT:
It's so difficult for an AI to do because there simply isn't enough training data for the task. Beyond the fact that most commercial-grade software applications don't have publicly available repos to train on, much of software engineering has almost nothing to train on at all.
How do you train an AI to engage in requirements gathering, requirements analysis, client communication, requirements-specific design and architecture, testing, deployment, maintenance, etc.? These aren't things that are perfectly encapsulated in trainable datasets. It gets even iffier when we're talking about software that has to follow any sort of regulations, especially safety regulations.
It will be possible eventually, but not until much more general capabilities, such as human-level logical reasoning and communication, are developed. Basically, software engineering is safe until we have competent AGI, and the singularity comes not long after that. (I say "competent" because nobody is replacing software engineers on large, enterprise-level applications with AI that can only do the job poorly.)
ChronoPsyche t1_iykljnl wrote
Reply to comment by thePsychonautDad in Is my career soon to be nonexistent? by apyrexvision
The ability to query the internet will be a game changer for large language models in many ways.
ChronoPsyche t1_iyjk9ld wrote
Reply to comment by apyrexvision in Is my career soon to be nonexistent? by apyrexvision
Software engineer too. I tried to use ChatGPT to create a web application using a style library I've never used before. Took the code it gave me, plugged it in, and was given dozens of errors. Turns out ChatGPT was using a deprecated version of the library. I then had to go in and manually alter the code to match with the current syntax. By the time I was done doing that, I had basically learned the library from scratch the same as I would have without ChatGPT.
Our jobs are safe. While the tools will surely become more advanced, at the end of the day you always need someone who actually understands the code and the business requirements. AI is just a tool, and as the tools get more advanced, the requirements will become more complex and everything will balance out.
Just make sure you are always learning the latest tech and keeping on top of things. Even before large language models, software engineering was a job that required life-long learning. The people programming with punch cards probably thought their jobs were gone. Those who kept on top of things retained the skills needed to grow with the technology.
You don't need to learn machine learning unless you want to create the tools yourself, but you do need to know how to use them.
ChronoPsyche t1_iybzgdg wrote
Reply to comment by giveuporfindaway in Will beautiful people lose most of their sexual market value in the coming decades? by giveuporfindaway
OP, real talk time. Those who haven't won the genetic lottery do have a somewhat tougher time, but "ugly" people still get friends and partners. I know this because I see it all the time.
And I put "ugly" in quotation marks because few people walk around thinking "oh, that's an ugly person." They're just a person, and when you get to know someone, you see their personality, not their appearance.
What seems to be your problem is not anything about how you look, but your personality. I don't know how you expect anyone to read a comment like this and come away thinking "wow, this is a really chill person, why is nobody their friend?"
If you act anything close to the way you are acting here in real life, I would seriously suggest working on that. I know you said no shrinks, but you're kinda inviting it.
By the way, I'm never invited to parties either and while I'm not a 10, I'm also not "ugly". I have a reserved/awkward personality and that has held me back in the social world. And a lot of those "ugly" people have a way better social life than me because they are a joy to be around. I know this and fully own it and am working on it. Looks really have less to do with things than you think.
ChronoPsyche t1_ix58csz wrote
Reply to comment by cypherl in is it ignorant for me to constantly have the singularity in my mind when discussing the future/issues of the future? by blxoom
Nobody is suggesting dropping us to 200 ppm. The ideal CO2 concentration is generally considered to be between 280 ppm (preindustrial levels) and the low 300s. It would be absolutely safe to drop to those levels.
However, even if we stopped all carbon emissions immediately, it would take thousands of years to return to those levels naturally.
That's not what "solving climate change" is about. It's about slowing the increase of carbon dioxide in the atmosphere to levels that are more manageable and easier to adapt to.
If we continue at the current level of emissions, we will eventually hit a runaway effect: natural feedback loops get triggered, the effects of climate change accelerate to disastrous levels very quickly, and they become nearly impossible to stop. That is what we are trying to prevent by lowering emissions.
No scientist actually believes we can turn back warming in the near- or medium-term future. That ship sailed long ago. So don't worry: if you aren't living on a glacier right now, you won't be in the future either.
ChronoPsyche t1_ix54iau wrote
Reply to comment by cypherl in is it ignorant for me to constantly have the singularity in my mind when discussing the future/issues of the future? by blxoom
The issue with modern climate change is how fast it is happening compared to natural climate change. It is simply occurring too fast for humans to adapt properly. It is occurring at an exponential rate, actually, similar to the singularity, and once we reach the point of no return, feedback loops will kick in and shit will get real, real fast.
As for living on a glacier, I can't tell if you're serious or not. Solving climate change doesn't mean cooling the planet; it means keeping the warming from getting out of control.
ChronoPsyche t1_ix2s0tk wrote
Reply to comment by HeinrichTheWolf_17 in is it ignorant for me to constantly have the singularity in my mind when discussing the future/issues of the future? by blxoom
I was responding to OP, who was basically making the point that we don't need to worry about climate change because AGI will be trillions of times smarter than humans by 2050 (paraphrased). My point was that we do need to worry about it and do something with whatever methods we have available (right now, mainly limiting emissions and transitioning to a green economy) rather than just assuming the Singularity will save us. I don't disagree with what you said; it just misreads the point of my comment. You're right that I was talking about AGI, because that's what OP was talking about.
I don't blame you if you didn't read his post, though. It was a bit of a mess.
ChronoPsyche t1_izuuwkq wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
I'll entertain this rhetorical game you're playing, but I will mention that it's generally frowned upon not to engage with the conversation at hand.
Why do we have computers? We have computers because Alan Turing wanted to answer a question that arose in the wake of Kurt Gödel's incompleteness theorems: the Entscheidungsproblem, which asks whether there is a general procedure that can determine, for any statement in a formal system of logic, whether it can be proven true or false by that system. In other words, he was trying to answer the famous question, "is mathematics decidable?"
So Alan Turing created the concept of a Turing Machine, a theoretical device that could carry out any algorithm and thus compute anything that is computable. He then formulated a proof showing that there is a problem no Turing Machine can decide: the Halting Problem.
The Halting Problem asks whether there is an algorithm, runnable on a Turing Machine, that can determine with certainty, for any given program, whether that program will run forever or eventually halt with an answer, no matter how long it may take.
Alan Turing proved mathematically, by contradiction, that no Turing Machine can answer that question in every case, and thus that mathematics as a system of formal logic is undecidable. In other words, there are some statements made within a formal system of logic that cannot be proven true or false by that system. These problems usually involve self-reference.
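The proof by contradiction is short enough to sketch in code. Suppose a perfect halts() oracle existed (all names here are illustrative, not anyone's actual API); then the following self-referential program breaks it:

```python
# Sketch of Turing's proof by contradiction (illustrative names only).
# Suppose a perfect oracle existed:
def halts(program, data) -> bool:
    """Hypothetical: returns True iff program(data) eventually halts."""
    ...

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:  # loop forever if the oracle says "halts"
            pass
    return           # halt immediately if the oracle says "loops"

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) is True, paradox loops forever -- contradiction.
# If it is False, paradox halts immediately -- contradiction.
# So no such halts() oracle can exist.
```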
So in the process of formulating this proof, Alan Turing essentially and accidentally invented the theoretical foundation of computer science.
TL;DR
So to answer your question, we have computers because an English scientist accidentally invented the theoretical foundation of computer science while trying to answer a question about mathematics.
The second reason we have computers is World War 2. Much of Alan Turing's research was funded by the British government in its effort to break the German Enigma cipher. The broader wartime effort led to the first general-purpose electronic computer, ENIAC, in 1945, which was built for the United States Army to calculate artillery firing tables and was then used to speed up calculations for the hydrogen bomb that had previously been done by hand.
If you want an oversimplified answer, computers were invented to help us perform calculations faster.
How any of this is relevant to my initial comment, I still do not know.