Recent comments in /f/MachineLearning

Formal_Overall t1_jdhf32f wrote

i like that openai has partnered with select companies to make sure they have plugins from the get-go, and then also put plugin development behind a waitlist, ensuring that select hand-chosen companies can corner their market. very cool, very open and ethical of them

1

LeN3rd t1_jdhe9qb wrote

What language/suite are you using? Take a look at the profilers available for it. I know TensorFlow has profiling tools that show which operations are running on which device, and PyTorch probably has some as well. If it's something more esoteric, just use a general-purpose profiler for your language and look at where your code spends most of its time.
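For the general-purpose-profiler route, here is a minimal sketch using Python's built-in cProfile; the function names are placeholders standing in for whatever your pipeline actually does:

```python
import cProfile
import io
import pstats

def slow_part():
    # stand-in for an expensive step in your pipeline
    return sum(i * i for i in range(100_000))

def fast_part():
    # stand-in for a cheap step
    return sum(range(100))

def pipeline():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

# print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report makes it obvious which function dominates the runtime, which is usually all you need before reaching for framework-specific tools.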

2

ConsiderationDry7153 t1_jdhd04u wrote

Maybe, maybe not; we cannot say for sure. Even if they haven't, I don't think there is anything you can do that would really help your case. But this is only my personal opinion.

I am in a similar position: no response from any of the three reviewers. But I don't think it has to be bad news: maybe they do not need more details to make their decision.

Remember that the reviewers may not be that interested in your research, so they will only ask enough questions to get a global view and no more.

1

Kaasfee t1_jdhcnlf wrote

I'm trying to train YOLOv7 to detect football (the European kind) players and the ball. In a typical frame there are lots of players and only one ball. After training, it only detects the players. My guess is that it learned to ignore the ball since it's statistically irrelevant. Is this assumption correct, and if so, how would I go about changing it?
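If that class-imbalance guess is right, one common mitigation is to upweight the rare class in the classification loss. A minimal sketch of inverse-frequency class weights; the label counts here are made up for illustration:

```python
from collections import Counter

# hypothetical label counts across the dataset:
# many "player" boxes, very few "ball" boxes
labels = ["player"] * 980 + ["ball"] * 20
counts = Counter(labels)
total = sum(counts.values())

# inverse-frequency weighting: the rare "ball" class gets a much
# larger weight so its loss isn't drowned out by the common class
weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(weights)  # "ball" weight is far larger than "player"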

1

BinarySplit t1_jdh9zu6 wrote

GPT-4 is potentially missing a vital feature to take this one step further: Visual Grounding - the ability to say where inside an image a specific element is, e.g. if the model wants to click a button, what X,Y position on the screen does that translate to?

Other MLLMs have it though, e.g. One-For-All. I guess it's only a matter of time before we can get MLLMs to provide a layer of automation over desktop applications...
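As a rough sketch of the missing piece: once a grounding model returns a normalized bounding box for, say, a button, the automation layer only has to map it to screen pixels before clicking. The function name and box values here are hypothetical:

```python
def bbox_center_to_screen(bbox_norm, screen_w, screen_h):
    """Map a normalized (x1, y1, x2, y2) box from a grounding model
    to the pixel position an automation layer would click."""
    x1, y1, x2, y2 = bbox_norm
    cx = (x1 + x2) / 2 * screen_w  # horizontal center in pixels
    cy = (y1 + y2) / 2 * screen_h  # vertical center in pixels
    return round(cx), round(cy)

# e.g. a box the model claims covers a button, on a 1920x1080 screen
click = bbox_center_to_screen((0.40, 0.40, 0.60, 0.50), 1920, 1080)
print(click)  # (960, 486)
```

The hard part is the grounding itself; the coordinate math on top of it is trivial, which is why the missing feature matters so much.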

204

nerdimite t1_jdh7u44 wrote

Whether or not this is AGI seems irrelevant as long as it can demonstrate the capabilities or simulate intelligent behaviour. Also, what is AGI if not "artificial" intelligence, not real or true intelligence per se? We keep trying to compare human intelligence with AI, but if two things demonstrate similar intelligent properties, regardless of how, they can still be called sort of intelligent. Intelligence itself is a very subjective and philosophical term. At this point in the technology, my opinion is that it shouldn't matter what is and what is not AGI, because there's no way to measure that right now that everyone agrees on, as long as it demonstrates some form of "artificial" intelligence.

1

utopiah t1_jdh7hxy wrote

Reply to comment by sEi_ in [N] ChatGPT plugins by Singularian2501

Thanks, but that only clarifies things from the UX side. We don't know whether OpenAI saves them and could decide to include past sessions in some form, as context, even with the current model, do we?

1

Econophysicist1 t1_jdh6fac wrote

Right, emergent properties are the key, and they cannot be predicted from what NLMs are supposed to do or how they work; that is why they are emergent. The only way to find out what properties a well-trained NLM has is to test it experimentally, as this paper did and as other papers are doing, such as this one:
https://arxiv.org/abs/2302.02083#:~:text=Theory%20of%20Mind%20May%20Have%20Spontaneously%20Emerged%20in%20Large%20Language%20Models,-Michal%20Kosinski

15