Recent comments in /f/MachineLearning

Exarctus t1_jcfmqqs wrote

I think you’ve entirely misunderstood what PyTorch is and how it functions.

PyTorch is a front-end to libtorch, the C++ backend. Libtorch itself is a wrapper around various highly optimised libraries as well as CUDA implementations of specific ops. Virtually nothing computationally expensive is done in the Python layer.
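The same principle can be sketched with NumPy as a stand-in for libtorch's compiled kernels (an illustrative assumption, not PyTorch internals): a vectorized call dispatches the whole loop to compiled code, so the Python layer only pays a thin call overhead.

```python
import time
import numpy as np

n = 1_000_000
xs = list(range(n))
arr = np.arange(n, dtype=np.int64)

# Pure-Python loop: every multiply-add runs in the interpreter.
t0 = time.perf_counter()
total_py = sum(x * 2 for x in xs)
t_py = time.perf_counter() - t0

# Vectorized call: the loop runs in compiled C, analogous to how
# PyTorch ops dispatch into libtorch / CUDA kernels.
t0 = time.perf_counter()
total_np = int((arr * 2).sum())
t_np = time.perf_counter() - t0

assert total_py == total_np  # same result, very different cost
print(f"python loop: {t_py:.4f}s, vectorized: {t_np:.4f}s")
```

On typical hardware the vectorized version is one to two orders of magnitude faster, which is why the Python front-end costs so little in practice.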

5

Capital-Duty-744 t1_jcfidsx wrote

What are the most important concepts that I need to know for ML? Possible courses are below:
Algebra & Calculus II
Algebra & Calculus III
Bayesian Stats
Probability
Multivariate stats analysis
Stochastic processes
Time series
Statistical inference

How familiar do I need to be with linear algebra?

2

suflaj t1_jcfbkxq wrote

It took more than 6 years from zero, because reaching GPT-4 required developing transformers and all the GPTs before it... The actual difference between ChatGPT and GPT-4 is apparently in the data and in some policies regulating when it is allowed to answer (which are still incomplete). This is not remarkable.

I AGAIN fail to see how this relates to previous comments.

0

namey-name-name OP t1_jcf601z wrote

I just hope that whatever regulations Congress chooses to implement actually end up being effective at promoting ethics while not crushing the field. After seeing Congress question Zuckerberg, I can’t say I have 100% faith in them. But I’m willing to be optimistic that they’ll be able to do a good job, especially since I believe that regulating AI has largely bipartisan support.

1

LanchestersLaw t1_jcf5x9c wrote

I think the most similar historical example is the Human Genome Project, where the government and private industry were both racing to be the first to fully decode the human genome, but the US government was releasing its data, and industry could use it to pull even further ahead.

It’s the classic prisoner’s dilemma. If both parties are secretive, research is much slower and might never complete, with only a small probability of finishing the project first for a high private reward to the owner and a low reward for society. If one party shares and the other does not, the withholding party gets a huge comparative boost, with a high probability of a high private reward. If both parties share, we get the best case: the parties can split the work and share insights so less time is wasted, with a very high probability of a high private and a high public reward.
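The payoff structure described above can be sketched as a toy matrix (the numeric values are made-up assumptions, chosen only to preserve the ordering of outcomes in the comment):

```python
# Toy prisoner's-dilemma payoffs for two labs deciding whether to share
# research. Each entry is (row player's reward, column player's reward);
# the numbers are illustrative, only their ordering matters.
payoffs = {
    ("share", "share"):       (3, 3),  # split the work, fast progress for all
    ("share", "withhold"):    (0, 4),  # withholder free-rides ahead
    ("withhold", "share"):    (4, 0),
    ("withhold", "withhold"): (1, 1),  # slow, secretive research
}

def best_response(opponent_action):
    """Row player's best reply given the opponent's fixed action."""
    return max(("share", "withhold"),
               key=lambda a: payoffs[(a, opponent_action)][0])

# Withholding dominates individually, yet mutual sharing beats mutual
# withholding -- exactly the dilemma described above.
assert best_response("share") == "withhold"
assert best_response("withhold") == "withhold"
assert payoffs[("share", "share")][0] > payoffs[("withhold", "withhold")][0]
```

Because withholding is each party's best response regardless of the other's choice, the cooperative outcome needs an outside mechanism to be stable.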

I think for AI we need mutual cooperation, and to stop seeing ourselves as rivals. For the shared good of humanity in general, the rewards of AI cannot be privatized (“humanity” regrettably does include Google and the spider piloting Zuckerberg’s body). Mutually beneficial agreements with enforceable punishment for contract breakers are what we need to defuse tensions, not an escalation of tensions.

3