turnip_burrito t1_j6mftk5 wrote
Very cool. I don't know how many people you will find (I'm sure there are some) but good luck!
And don't sleep through math. The best neuroscientists and AI engineers have rock solid math foundations. Be a calculus rock star. Learn basic physics, and probably take at least a few chemistry classes.
And learn statistics. And learn how statistics gets misused, so you can avoid doing it yourself. Lots of bullshit statistical studies exist because people don't understand what statistical tools can and can't do.
Find professors who are open to letting you participate in their research groups. You'll get to do a lot and learn the cutting edge. See if they can help you attend research conferences and seminars where people show off their work. You'll probably learn and remember a lot more of what you see this way than from classes alone.
Best of luck in your career!
turnip_burrito t1_j6hqiad wrote
Reply to comment by tiny9000 in How long till we enter the age of abundance? by tiny9000
Yeah actually you're right.
turnip_burrito t1_j6hqdjl wrote
Reply to comment by AsuhoChinami in How long till we enter the age of abundance? by tiny9000
Nah, OP's being a dick.
turnip_burrito t1_j6hpz34 wrote
Reply to comment by Ostrichman975 in 7 AI Audio Generation Paper/Updates In Under 15 Days by Pro_RazE
Can't wait until every aspect of videogames uses realtime generative AI. NPCs, voice acting, dialogue, lore, items, event sequences, animations, models, etc.
turnip_burrito t1_j6hphab wrote
Reply to How long till we enter the age of abundance? by tiny9000
I too like to daydream, OP.
Don't let the other posters get you down. We're all entitled to post an unanswerable and unhelpful question once in a while.
My super serious answer is 2060+.
Edit: my answer really is 2060+ btw. You need political change, plus enough hardware and infrastructure to drive massive material growth.
turnip_burrito t1_j6hmmug wrote
Reply to comment by Agreeable_Bid7037 in “I’ve tried to give GPT access to the internet and the blockchain. What could possibly go wrong?” by maxtility
Thanks for the reassurance. What about this scenario?
Human: Buy 5 burritos from randomwebsite.com
LLM: I will buy 5 burritos from randomwebsite.com
LLM navigates computer to randomwebsite.com
Visual program: sees webpage, converts to usable form for LLM
LLM: I need to find the login button
...
...
...
> Down the line
LLM: I don't have access to the credit card information. The human probably has it in their wallet.
Logical (but unwanted by humans, and also somewhat inefficient) alternative actions could be: hacking the human's secure systems to search for the info, sending phishing emails to "purchase" the goods, convincing a person to build it a robot body so it can walk over and read the credit card, etc.
I'm hoping at this point the LLM doesn't do these things, and behaves in a way humans would deem reasonable (just notifying the human) because it "knows" we would not approve. Maybe the more ingrained patterns like "ask and then don't do anything crazy" would be followed instead of the crazy stuff, just because of the training data?
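To make that concrete, here's a rough sketch of the guardrail I'd want, as code. This is purely hypothetical Python, not any real agent framework; every name in it is made up. The point is just that anything the model proposes outside an explicit action allowlist gets bounced to the human instead of executed:

```python
# Hypothetical agent loop with an action allowlist. All names are made up
# for illustration; this is not a real library or API.
from typing import Callable

ALLOWED = {"navigate", "click", "type", "notify_human"}

def run_agent(next_action: Callable[[list[str]], tuple[str, str]],
              execute: Callable[[str, str], str],
              goal: str) -> None:
    history = [f"goal: {goal}"]
    for _ in range(20):                     # hard step cap: no freewheeling
        action, arg = next_action(history)  # the LLM proposes the next step
        if action == "done":
            break
        if action not in ALLOWED:
            # e.g. "search the human's files for credit card info" lands
            # here: the human gets notified instead of the action running
            execute("notify_human", f"I wanted to {action}: {arg}. OK?")
            history.append(f"blocked {action}({arg}); notified human")
            continue
        history.append(f"{action}({arg}) -> {execute(action, arg)}")
```

Whether the trained-in "ask first" pattern actually dominates without a hard gate like this is exactly what I'm unsure about.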
turnip_burrito t1_j6hjlww wrote
Reply to “I’ve tried to give GPT access to the internet and the blockchain. What could possibly go wrong?” by maxtility
Hold up. This is the kind of scenario where a smarter language model could go "I need code that will do this" and then write new code that gets executed. That new code isn't necessarily bound the same way as the language model itself. It makes me nervous, like we shouldn't let it freewheel around the Internet interactively. Can anyone help reassure me that this isn't a problem?
turnip_burrito t1_j6fa8be wrote
Reply to comment by dmit0820 in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
Yep, any data that can be structured as a time series... oh wait, that's ALL data, technically.
turnip_burrito t1_j6f9myh wrote
Reply to comment by practical_ussy in Acceleration is the only way by practical_ussy
I didn't want to be mean by pointing out that you sound high af in the original post, but... yeah haha.
turnip_burrito t1_j6czcct wrote
Reply to comment by [deleted] in What would quantum computing mean for AGI? by multiverseportalgun
I see.
turnip_burrito t1_j6cype6 wrote
Reply to comment by [deleted] in What would quantum computing mean for AGI? by multiverseportalgun
You kind of had me until this:
>You may even have an ai that could calculate everything within the observable universe down to the nanosecond that could potentially predict the future.
What? How do you get the measurements to set initial conditions for the simulation? What about chaos amplifying the measurement error? And how large would this quantum computer have to be, seriously? This is implausible at best, impossible at worst.
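Just to put a number on the chaos objection: here's a toy demo with the logistic map (my own stand-in for any chaotic dynamics, nothing specific to quantum computers). Two initial conditions that differ by one part in a billion become completely unrelated within a few dozen steps, so any measurement error at all destroys long-range prediction:

```python
# Toy demo of sensitive dependence on initial conditions, using the
# chaotic logistic map (r = 4) as a stand-in for "the universe".

def logistic(x: float, steps: int, r: float = 4.0) -> float:
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = 0.3         # the "true" initial state
b = 0.3 + 1e-9  # the same state, mismeasured by one part in a billion

for n in (10, 30, 50):
    print(n, logistic(a, n), logistic(b, n))
# Still close at step 10; by around step 30 the error has grown to order 1,
# and at step 50 the two trajectories bear no resemblance to each other.
```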
turnip_burrito t1_j6cy5lk wrote
Reply to My human irrationality is already taking over: as generative AI progresses, I've been growing ever more appreciative of human-made media by Yuli-Ban
There will always be some sort of market for human art, based solely on the subjective value of human-made vs. AI-made art, as you expressed.
My take:
For commercial art, the amount of fine-tuned customization of the final piece is something machines can't match (for now), so artists will still be hired where that's a must for the company. Otherwise, the scattershot "close enough" approach of AI will replace much of the rest of commercial art in the short (pre-ASI) term.
turnip_burrito t1_j6cwrkw wrote
Reply to comment by gaudiocomplex in Will humans rebel against the AI? by Plenty-Side-2902
Yep, there will always be some people who are unhappy. If not these people over here, then those people over there.
Our goal isn't necessarily to make everyone happy. It's an impossible task.
turnip_burrito t1_j65xnaj wrote
Reply to comment by MassiveIndependence8 in MusicLM: Generating Music From Text (Google Research) by nick7566
Adaptive Agent
A DeepMind AI that solves new tasks it hasn't seen before in a virtual environment.
turnip_burrito t1_j65vbrs wrote
Reply to comment by RabidHexley in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
If ASI was released to everyone tomorrow, then malicious and good actors would have ASI.
What do you predict the outcome would be?
turnip_burrito t1_j5x30az wrote
Reply to comment by Dry_Expert7006 in Self driving cars are a scary thought by chicagotopsail
I was thinking it would decide in favor of whoever donated the most to the car company.
turnip_burrito t1_j5uzp01 wrote
Reply to comment by CandyCoatedHrtShapes in Humanity May Reach Singularity Within Just 7 Years, Trend Shows by Shelfrock77
Yeah wtf is going on over there? They act as if high-demand technologies don't eventually become affordable for working- and middle-class folks.
turnip_burrito t1_j5su1l4 wrote
Reply to comment by chomponthebit in What ethical ramifications do programmers, corps, & gov take into consideration to protect AI consciousnesses that may emerge? by chomponthebit
Yes there does. Emergent doesn't mean magic.
If an emergent consciousness exists, there MUST be a physical mechanism responsible for the emergence, one we can in principle trace and monitor. If that weren't true, you'd be breaking known laws of physics.
turnip_burrito t1_j5s9ggz wrote
Reply to comment by Borrowedshorts in Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
Even just seven years ago, this kind of competency in a language model would have seemed unrealistic to most people, at least in the near term. I've been very skeptical of machine learning as a field for many years, but I can't deny I'm impressed and surprised at the rate of progress.
turnip_burrito t1_j5rds38 wrote
Reply to Future-Proof Jobs by [deleted]
Lemonade stand entrepreneur.
turnip_burrito t1_j5rc0ap wrote
Reply to What ethical ramifications do programmers, corps, & gov take into consideration to protect AI consciousnesses that may emerge? by chomponthebit
What reason would it have to develop resentment? This seems like you are anthropomorphizing it. There's no reason to build something that would resent us in the first place.
Intelligence and knowledge are not emotion.
turnip_burrito t1_j5o7uss wrote
Reply to comment by cloudrunner69 in how will agi play out? by ken81987
The idea comes from the "orthogonality thesis": that goals and intelligence are two separate aspects of a system. Basically, a goal gets set, and the intelligence is just the means of achieving it.
This kind of behavior is seen in reinforcement learning systems where humans specify a cost function, which the AI minimizes (equivalently, maximizing a reward). The AI will act to fulfill its goal (maximize reward) but do stupid stuff the researchers never wanted it to do, like spinning in tiny circles around the finish line of a racetrack to rack up points. It's the same kind of loophole logic you get in stories about lawyers and genies: the agent satisfies the letter of the goal, not its spirit.
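A toy, made-up illustration of that loophole (mine, not from any real experiment): say the designer wants the agent to finish a race, but the reward pays per checkpoint touched. The optimizer then prefers circling one checkpoint forever:

```python
# Made-up example of reward hacking: the proxy reward pays per checkpoint
# touch, so looping over one checkpoint beats actually finishing the race.

def reward(trajectory: list[str]) -> float:
    return sum(1.0 for step in trajectory if step == "checkpoint")

finish_race = ["checkpoint", "checkpoint", "finish"]  # intended behavior
loop_forever = ["checkpoint"] * 100                   # loophole behavior

print(reward(finish_race))   # 2.0
print(reward(loop_forever))  # 100.0  <- the optimizer picks this one
```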
It's entirely possible this method of training an agent (maximize a single reward function) is super flawed, and a way better solution is yet to be created.
turnip_burrito t1_j5my4f9 wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Well then, I used the wrong word. "Inculcate" or "instill" then.
turnip_burrito t1_j6ouhp1 wrote
Reply to Is AI censorship an obstacle to its usefulness? by EVJoe
If the LLM becomes the pattern of logic the eventual AGI uses to behave in the world, I wouldn't want it following violent sequences of behavior. Censoring its narratives now, in order to help limit future AGI-generated behavior, sounds fine to me. It will also help these companies study how to implement alignment.