Recent comments in /f/MachineLearning
daugaard47 t1_jdkkyds wrote
Reply to comment by signed7 in [N] ChatGPT plugins by Singularian2501
Wish they had stayed open source, but I can understand why they would sell out. There would have been no way they could handle the amount of traffic/demand if they had remained a non-profit. But as someone who works for a non-profit, I don't understand how they legally changed to a for-profit over a week's time. 😐
Nyanraltotlapun t1_jdkkc6q wrote
Reply to comment by 3deal in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
There is no way for humans to adapt to an alien intelligence. The idea of developing general AI has been insanely horrifying from the beginning.
daugaard47 t1_jdkk9m7 wrote
Reply to comment by sEi_ in [N] ChatGPT plugins by Singularian2501
I'm a subscriber to the Plus plan and joined the waitlist on day one, and still have no access to the plugins as of now. 😑
DriftingKing t1_jdkj88z wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
The paper literally showed theory of mind, wtf? Why are you blatantly making stuff up and getting upvoted?
rePAN6517 t1_jdkinrg wrote
Reply to comment by nixed9 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
> This is quite literally what we hope for/deeply fear at /r/singularity
That sub is a cesspool of unthinking starry-eyed singularity fanbois that worship it like a religion.
JohnFatherJohn t1_jdkik7r wrote
Reply to comment by SatoshiNotMe in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
It's not available to the public yet, restricted to specific groups that are conducting research.
3deal t1_jdkiao9 wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
AI is growing faster than our capacity to adapt. We are doomed.
kromem t1_jdkfj5w wrote
> The model underlying Dolly only has 6 billion parameters, compared to 175 billion in GPT-3, and is two years old, making it particularly surprising that it works so well. This suggests that much of the qualitative gains in state-of-the-art models like ChatGPT may owe to focused corpuses of instruction-following training data, rather than larger or better-tuned base models.
The exciting thing here is the idea that progress in language models is partially contagious backwards to earlier ones: newer models can generate the data used to update older ones, not in pre-training but in fine-tuning (and I expect, based on recent research into in-context learning, this would extend to additional few-shot prompting).
I'm increasingly wondering if we'll see LLMs develop into rolling releases, particularly in the public sector. Possibly with emphasis on curating the data set for fine tuning with a platform agnostic stance towards the underlying pre-trained model powering it.
In any case, it looks more and more like the AI war between large firms will trickle down into open alternatives whether they'd like it to or not.
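The back-porting idea above can be sketched in a few lines. This is a toy illustration only: `teacher_generate` is a hypothetical stand-in for a call to a stronger model (e.g. via an API), and the prompt format is just one common supervised fine-tuning convention, not anything from the Dolly paper.

```python
# Toy sketch: turn outputs of a newer "teacher" model into instruction
# fine-tuning examples for an older, smaller "student" model.

def teacher_generate(instruction: str) -> str:
    """Hypothetical stand-in for querying a stronger model."""
    canned = {
        "Explain overfitting in one sentence.":
            "Overfitting is when a model memorizes its training data "
            "instead of learning patterns that generalize.",
    }
    return canned.get(instruction, "...")

def build_finetune_example(instruction: str) -> dict:
    """Package an instruction/response pair in a common SFT format."""
    return {
        "prompt": f"### Instruction:\n{instruction}\n\n### Response:\n",
        "completion": teacher_generate(instruction),
    }

# A dataset of such pairs would then be fed to an ordinary
# fine-tuning loop for the older base model.
dataset = [build_finetune_example("Explain overfitting in one sentence.")]
```

The point is that the expensive part (high-quality instruction-following responses) is produced once by the newer model, then reused to lift the older one.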
__Maximum__ t1_jdkepie wrote
Reply to comment by mxby7e in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Also, it's very shady for a company called OpenAI. They claimed they became for-profit because they needed the money to grow, but these restrictions just show that they are filthy liars who only care about keeping power and making profit. I'm sure they already have a strategy for getting around that 30B cap, just like they planned on stealing money and talent by calling themselves a non-profit first.
SatoshiNotMe t1_jdke4cu wrote
Reply to comment by JohnFatherJohn in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
How do you input images to GPT-4? Via the API?
nicku_a OP t1_jdkdxy8 wrote
Reply to comment by LifeScientist123 in [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
Good question! So what we're doing here is not applying evolutionary algorithms instead of RL. We're applying evolutionary algorithms as a method of HPO, while still using RL to learn, with all its advantages. Take a look at my other comments explaining how this works, and check out the docs for more information.
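The split described here (evolution over hyperparameters, RL inside each evaluation) can be sketched minimally. This is a generic illustration, not the project's actual API: `evaluate` is a stand-in for training an RL agent with a given config and returning its mean reward.

```python
import random

random.seed(0)

def evaluate(config):
    # Stand-in for: train an RL agent with these hyperparameters,
    # return its mean episodic reward. Here we just reward configs
    # close to a known-good point so the loop has something to find.
    return -abs(config["lr"] - 1e-3) * 1000 - abs(config["gamma"] - 0.99) * 10

def mutate(config):
    """Perturb a parent's hyperparameters to create a child config."""
    return {
        "lr": config["lr"] * random.choice([0.5, 1.0, 2.0]),
        "gamma": min(0.999, max(0.9, config["gamma"] + random.uniform(-0.01, 0.01))),
    }

# Random initial population of hyperparameter configurations.
population = [
    {"lr": 10 ** random.uniform(-5, -2), "gamma": random.uniform(0.9, 0.999)}
    for _ in range(8)
]
initial_best = max(population, key=evaluate)

for generation in range(5):
    scored = sorted(population, key=evaluate, reverse=True)
    elite = scored[: len(scored) // 2]                 # keep the better half
    population = elite + [mutate(random.choice(elite)) for _ in elite]

best = max(population, key=evaluate)
```

Because the elite half survives each generation, the best configuration can only improve; meanwhile the RL training itself stays untouched inside `evaluate`.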
__Maximum__ t1_jdkdtp2 wrote
ClosedAI is feeding off of our data. If we start using/supporting Open Assistant instead, it will beat ChatGPT in a month or two.
farmingvillein t1_jdkdjye wrote
Reply to comment by SatoshiNotMe in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
> I see it’s multimodal but how do I use it with images?
You unfortunately can't right now--the image handling is not publicly available, although supposedly the model is capable.
SatoshiNotMe t1_jdkd8l5 wrote
Reply to comment by farmingvillein in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
I’m curious about this as well. I see it’s multimodal, but how do I use it with images? The ChatGPT Plus interface clearly does not handle images. Does the API handle images?
nicku_a OP t1_jdkd7qf wrote
Reply to comment by Modruc in [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
We’ve also shown that using these libraries as-they-come is far slower for real problems than what we can offer!
simmol t1_jdkd4pf wrote
Reply to comment by MyPetGoat in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
Seems quite inefficient, though. Can't GPT just access the HTML or other code associated with the website, and interact with the site via text as opposed to images?
nicku_a OP t1_jdkd2ax wrote
Reply to comment by Modruc in [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
Libraries like Stable Baselines/RL Zoo are actually quite inflexible and hard to fit to your own problem. We’re introducing (with plans to add way more!) RL algorithms that you can use, edit, and tune to your specific needs, faster and more flexibly.
DragonForg t1_jdkb8w9 wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
This is fundamentally false. Here is why.
In order to prove something, and then prove it incorrect, you need distinct guidelines. Take gravity: there are plenty of equations, plenty of experiments, etc. We know what it looks like and, mathematically, what it is. So if we take a computational version of gravity, we have a reliable comparison method: someone can say this game's gravity doesn't match ours, because we have distinct proofs for why it doesn't.
However, what we are trying to prove/disprove here is something we have ZERO BASIS for. We barely understand the brain, or consciousness, or why things emerge the way they do; we are nowhere near close enough to make strict definitions of theory of mind or creativity. The only comparison we have is whether it mimics ours the most.
Stating "it doesn't follow my version of theory of mind" is ridiculous. It's the same as saying "my god is real and yours isn't": your basis for why we have creativity is not a distinct, proven definition, but rather an interpretation of your experiences studying/learning it.
Basically, our mind is a black box too; we only know what comes out, not what happens inside. If both machine and human get the same input and produce the same output, it legitimately doesn't matter what happens inside, at least until we can PROVE how the brain works to exact definitions. Until then, input and output data is sufficient as proof; otherwise AI will literally kill us while we keep obsessing over these definitive answers.
It's like arguing over whether nukes can do this or that, instead of focusing on the fact that a nuclear weapon can destroy all of humanity. The power of these tools, just like nuclear weapons, shouldn't be understated because of semantics.
Modruc t1_jdk903e wrote
Reply to [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up by nicku_a
Great project! One question, though: is there any reason why you are not using existing RL implementations, such as Stable Baselines, instead of creating your own?
SWESWESWEh t1_jdk8rtn wrote
Reply to comment by machineko in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
Doing the lord's work, my friend. Does it work with Apple Silicon's Metal shaders? I've trained my own models, as both TF and PyTorch support it, but I've noticed a lot of people use CUDA-only methods, which makes it hard to use open-source stuff.
MyPetGoat t1_jdk8icb wrote
Reply to comment by simmol in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
You’d need the model to be running all the time, observing what you’re doing on the computer. Could be done.
MyPetGoat t1_jdk8b7t wrote
Reply to comment by Puzzleheaded_Acadia1 in [D] I just realised: GPT-4 with image input can interpret any computer screen, any userinterface and any combination of them. by Balance-
How big is the training set? I’ve found small ones can generate gibberish
InitialCreature t1_jdk7ttk wrote
Reply to comment by laisko in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
haha that's pretty funny
MjrK t1_jdk4ig1 wrote
Reply to comment by Esquyvren in [R] Hello Dolly: Democratizing the magic of ChatGPT with open models by austintackaberry
For demonstration and research, not widely nor generally.
3deal t1_jdkmcrb wrote
Reply to comment by Nyanraltotlapun in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
We all know the issue, and we keep running toward it anyway.