Recent comments in /f/MachineLearning

1azytux OP t1_jd2ho88 wrote

I'm looking for ideas based on these papers:
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering

- Multimodal Chain-of-Thought Reasoning in Language Models

and others along those lines; for the general chain-of-thought idea for language models, you can look at this paper.

I'm not sure if the link you provided will work, but since it's huge I might have missed something (I've only glanced at it). Can you point out the parts you think deserve attention?

1

Leo_D517 OP t1_jd2hhov wrote

First of all, we are aware of this issue, and it will be resolved in the next version. For now, you can install the library by compiling it from source.

Please follow the steps in the documentation to compile the source code.

The steps are as follows:

  1. Install dependencies on macOS
    Install the Command Line Tools for Xcode. Even if you install Xcode from the App Store, you must configure command-line compilation by running:
    xcode-select --install
  2. Python setup:
    $ python setup.py build
    $ python setup.py install
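
After building and installing, you can run a quick sanity check. This is a minimal sketch: the architecture check uses only the standard library, and the final import simply assumes the source build above succeeded.

    import platform

    # Check which CPU architecture this Python interpreter runs on.
    # On Apple Silicon it should report "arm64"; an x86_64 Anaconda build
    # reports "x86_64" and will load x86_64 binaries, which causes the
    # incompatible-architecture dlopen error.
    print("Python architecture:", platform.machine())

    # If the source build matches the interpreter's architecture, this import
    # should load the native libaudioflux library without errors.
    import audioflux
    print("audioflux loaded from:", audioflux.__file__)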
7

fanjink t1_jd2ghpk wrote

This library looks great, but I get this:
OSError: dlopen(/Users/***/opt/anaconda3/envs/audio/lib/python3.9/site-packages/audioflux/lib/libaudioflux.dylib, 0x0006): tried: '/Users/***/opt/anaconda3/envs/audio/lib/python3.9/site-packages/audioflux/lib/libaudioflux.dylib' (mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e)))

3

Leo_D517 OP t1_jd2g6pg wrote

First, librosa is a very good audio feature library.

The differences between audioflux and librosa are:

  • Systematic, multi-dimensional feature extraction and combination that can be applied flexibly to a wide range of research and analysis tasks.
  • High performance: the core is implemented in C with platform-specific FFT hardware acceleration, which makes large-scale feature extraction practical.
  • Mobile support, including real-time computation on audio streams on mobile devices.

Our team works on audio MIR applications for mobile, so all feature-extraction operations must be fast and cross-platform on mobile devices.

For training, we originally used librosa to extract CQT-related features. It took about 3 hours for 10,000 samples, which was really slow.

Here is a simple performance comparison:

Server hardware:

- CPU: AMD Ryzen Threadripper 3970X 32-Core Processor
- Memory: 128GB

Each sample is 128 ms of audio (sampling rate: 32000 Hz, data length: 4096 points).

The table below shows the total time to extract features from 1,000 samples.

| Feature | audioFlux | librosa | pyAudioAnalysis | python_speech_features |
|---------|-----------|---------|-----------------|------------------------|
| Mel     | 0.777s    | 2.967s  | --              | --                     |
| MFCC    | 0.797s    | 2.963s  | 0.805s          | 2.150s                 |
| CQT     | 5.743s    | 21.477s | --              | --                     |
| Chroma  | 0.155s    | 2.174s  | 1.287s          | --                     |
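
For reference, here is a minimal timing sketch along the same lines. The librosa call is standard; the `af.mfcc` usage is written from memory of the audioflux examples, so treat its exact signature as an assumption and check it against the project documentation.

    import time
    import numpy as np
    import librosa
    import audioflux as af

    # 1,000 synthetic samples, each 4096 points at 32000 Hz (128 ms),
    # matching the benchmark setup described above.
    sr = 32000
    samples = [np.random.uniform(-1, 1, 4096).astype(np.float32) for _ in range(1000)]

    # librosa MFCC over all samples.
    t0 = time.time()
    for x in samples:
        librosa.feature.mfcc(y=x, sr=sr, n_mfcc=13)
    print("librosa MFCC: %.3fs" % (time.time() - t0))

    # audioflux MFCC over all samples.
    # NOTE: af.mfcc is assumed to follow the usage shown in the audioflux
    # examples; verify the signature against the official documentation.
    t0 = time.time()
    for x in samples:
        af.mfcc(x, samplate=sr)
    print("audioflux MFCC: %.3fs" % (time.time() - t0))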

Finally, audioflux has been in development for about half a year and has been open source for only a little over two months, so there are certainly deficiencies and room for improvement. The team will keep listening to community opinions and feedback.

Thank you for your participation and support. We hope the project keeps getting better.

46

Nikelui t1_jd2eapi wrote

> People are going to consult tools like ChatGPT about their mental health anyway, regardless of what he does; people are already doing it with Google, so why not ChatGPT, which can actually talk to you, remember what you said, etc.?

Because that's outside the scope of both Google and ChatGPT. If you are marketing your tool as a therapist aid and you don't have a license, you are probably breaking more laws than you can afford to.

2

Nezarah t1_jd297zo wrote

Is this essentially In-Context Learning?

You condense additional knowledge into a prefix to the prompt, as "context", so that the question/input can use that information to produce a more accurate/useful output?
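
(For illustration, a minimal sketch of that kind of prefixing; the context and question strings here are hypothetical placeholders:)

    # In-context learning in its simplest form: condensed knowledge is
    # prepended to the prompt so the model can ground its answer on it.
    context = "Photosynthesis converts light energy into chemical energy stored in glucose."
    question = "What does photosynthesis produce?"

    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    print(prompt)  # this string is what would be sent to the language model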

1

TimelySuccess7537 t1_jd27pk3 wrote

It looks like such tools will eventually exist and be widely used; it's inevitable. Whether you're the one who succeeds at it is a matter of ambition, market fit, luck, etc. It's not clear that people are ready for this now, but they will be eventually.

Good luck!

1

TimelySuccess7537 t1_jd27klp wrote

How so?

He can make the users sign a waiver. People are going to consult tools like ChatGPT about their mental health anyway, regardless of what he does; people are already doing it with Google, so why not ChatGPT, which can actually talk to you, remember what you said, etc.?

Sure, this thing needs to be tested thoroughly, but I really don't see why everyone is so outraged about this: psychotherapy is expensive and isn't the right fit for everyone, and maybe these tools can help people.

If some psychologist tested this app, would you be cool with it? I'm sure some psychologist will eventually vouch for such a tool.

Btw, actual psychotherapy is not only expensive but also too often ineffective: https://www.psychreg.org/why-most-psychotherapies-equally-ineffective/

1