Recent comments in /f/MachineLearning

DiscussionGrouchy322 t1_jdmrq88 wrote

Wow, so many words to try and say you're applying test-driven design to prompt engineering. I will keep this as an example of how not to write technical content. (I was reading the "blog post".)

Maybe this is a joke post that was also written by ChatGPT.

When you make those charts with the weights and things... are they meant to convey information, or are you just following a template where you saw information presented that way and trying to match the shape?

8

Anis_Mekacher OP t1_jdmrjpc wrote

That's a great idea. Is it something like a weekly or biweekly meeting where you get to explain the main concepts and ideas behind a paper in a short amount of time?

It won't work in my case, because my current job is more or less in the cybersecurity field and not a lot of people in my company are interested in AI or its developments.

4

artsybashev t1_jdmpwwd wrote

The fluffy, overly complex writing around your main message has worked as a barrier or prefilter, screening out bad job candidates or unqualified contributions to scientific discussion. LLMs are destroying that function. It will be interesting to see where this leads.

14

alexmin93 t1_jdmocbw wrote

The problem is that LLMs aren't capable of making decisions. While GPT-4 can chat almost like a sentient being, it's not sentient at all. It's not able to comprehend the limitations of its own knowledge and capabilities, and it's extremely hard to make it call an API to ask for more context. There's no way it will be good at using a computer like a user: it can predict what happens if you do something, but it won't be able to take the action itself.

It's mostly a dataset limitation. It's relatively easy to train language models because there's an almost infinite amount of text on the Internet, but are there any condition-action datasets? You'd need to observe human behavior for millennia (or install tracker software on thousands of workstations and observe users' behavior for years).

2
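The fragility described above can be made concrete. Below is a minimal sketch of a tool-calling loop; `ask_llm`, the action registry, and the weather handler are all hypothetical stand-ins (a real system would send the prompt to an actual model). The point is the brittle parse-and-dispatch step: the whole thing fails whenever the model's output isn't valid JSON naming a known action.

```python
import json

# Hypothetical stand-in for a real LLM call; a real system would send the
# prompt to a model and get free-form text back. Here we hard-code a
# plausible reply to show the parsing/dispatch side of the loop.
def ask_llm(prompt: str) -> str:
    return '{"action": "lookup_weather", "args": {"city": "Paris"}}'

# Registry of actions the model is allowed to trigger -- the
# "condition-action" side that the comment argues is hard to train.
ACTIONS = {
    "lookup_weather": lambda args: f"Weather in {args['city']}: 18C, cloudy",
}

def run_agent_step(user_request: str) -> str:
    prompt = (
        'Reply ONLY with JSON {"action": ..., "args": ...} choosing one of: '
        + ", ".join(ACTIONS)
        + f"\nUser: {user_request}"
    )
    reply = ask_llm(prompt)
    try:
        call = json.loads(reply)           # models often break this format,
        handler = ACTIONS[call["action"]]  # which is the fragility at issue
    except (json.JSONDecodeError, KeyError):
        return "model output was not a valid action"
    return handler(call["args"])

print(run_agent_step("What's the weather in Paris?"))
# -> Weather in Paris: 18C, cloudy
```

Everything outside the `try` block is deterministic plumbing; the open problem the comment points at is making the model reliably produce the structured reply.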

sweatierorc t1_jdmkacg wrote

Sure, humans under 40 are also very resistant to cancer. My point was that cancer comes with old age, and aging seems to be a way for us to die before cancer or dementia kills us. There is "weak" evidence that people who have dementia are less likely to get cancer. I understand that some mammals like whales or elephants seem to be very resistant to cancer, but if we were to double or triple their average life expectancy, other diseases might become more prevalent, maybe even cancer.

1

MarmonRzohr t1_jdmj8th wrote

>There are complex mammals that effectively don't get cancer

You got a source for that ?

That's not true at all according to everything I know, but maybe what I know is outdated.

AFAIK there are only mammals that seem to develop cancer much less than they should, namely large mammals like whales. Other than that, every animal from Cnidaria upward develops tumors; e.g., even the famously immortal Hydras develop tumors over time.

That's what makes cancer so tricky. There is a good chance that far, far back in evolution there was a selection between longevity and rate of change, or something else. There may therefore be nothing we can do to prevent cancer; we can only hope for suppression / cures when / if it happens.

Again, this may be outdated.

1

RiotSia t1_jdmhn6h wrote

Hey,

I got the 7B LLaMA model running on my machine. Now I want it to analyze a large text (a PDF file) for me, like hamata.ai does. How can I do it? Does anyone have a site with resources on how I can learn to do that, or can you tell me?

1
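For what it's worth, the usual pattern behind tools like this is map-reduce summarization: extract the PDF text (e.g. with pypdf), split it into chunks that fit the model's context window, run the model on each chunk, then combine the partial results. Here is a minimal sketch; `run_llama` is a placeholder for a real local-model call (e.g. via llama-cpp-python), and the 2000-character chunk size is an arbitrary assumption:

```python
# Map-reduce pattern for long-document analysis with a local LLM.
# `run_llama` is a placeholder; with llama-cpp-python you would load the
# 7B model once and call it here instead.
def run_llama(prompt: str) -> str:
    # Placeholder reply so the sketch runs without a model installed.
    return f"[summary of {len(prompt)} chars]"

def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    # Naive fixed-size split; real pipelines split on paragraph or
    # sentence boundaries to avoid cutting ideas in half.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_document(text: str) -> str:
    # Map step: summarize each chunk independently.
    partial = [run_llama("Summarize:\n" + c) for c in chunk_text(text)]
    # Reduce step: combine the per-chunk summaries into one answer.
    return run_llama("Combine these summaries:\n" + "\n".join(partial))
```

To feed it a PDF, you would first pull the text out (e.g. `"".join(page.extract_text() for page in pypdf.PdfReader("doc.pdf").pages)`) and pass that string to `summarize_document`.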