It's refreshing to see someone like Terence Tao embrace GPT-4 as an assistant.
A few friends of mine (who are ML practitioners!) still don't trust LLMs or find them useful in their day-to-day work.
I've noticed a similar trend among folks who work on compilers: they prefer to "stay in their lane" instead of embracing GPT-4. The opposite appears true among those who work on user-facing applications, where day-to-day adoption of LLMs is much higher.
An observation I've made is that people who were early adopters and effective users of web search, whether AltaVista back then or Google these days, are also good at using LLMs.
Almost every criticism of generative chat AI that I've seen in forums is someone holding it wrong: the direct equivalent of troubleshooting by searching for "my PC crashed" instead of a specific error code.
Similarly, the people complaining that ChatGPT isn't very good don't realise they're using the free-tier GPT-3.5 instead of the paid GPT-4, which is much smarter. It's the equivalent of people who use the default browser (IE) with the default search engine (Bing) and complain that "the Internet is not useful".
Just recently I had to review a bunch of legacy C# code, and I found GPT-4 enormously useful. It can find bugs and security issues in seconds, suggest fixes for them, explain why they're bad, and so on.
Note that I don't have to trust it to write security-bug-free code! I'm asking it to find bugs that I will then fix myself.
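For what it's worth, the loop can be as simple as piping a file through the API. Here's a minimal sketch using the OpenAI Python SDK; the prompt wording and model name are just my own choices, not a prescribed setup:

```python
# Minimal sketch: ask GPT-4 to review a legacy C# file for bugs and
# security issues. Prompt wording and model name are illustrative.
import sys
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

source = open(sys.argv[1]).read()  # e.g. LegacyService.cs

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Review the following C# code. List any bugs or security "
            "issues, explain why each is a problem, and suggest a fix. "
            "I will verify and apply the fixes myself.\n\n" + source
        ),
    }],
)
print(resp.choices[0].message.content)
```

The point of the last line of the prompt is exactly the trust model above: the model proposes, the human disposes.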
[flagged]
There is always that kid in the class who Googles the question and then prints out the first page as their assignment submission.
Which in no way supports or defends or justifies any of your ignorant fallacious attacks on science, tired false equivalencies about religion, and nasty ad-hominem attacks on scientists.
Plus it totally contradicts your assertions above about the usefulness of Google and ChatGPT. Flip, flop, flip, flop!
You could at least have the integrity and intellectual honesty to reply directly to the actual real live scientist, a professional quantum physicist, who took the time to correctly rebut your wild conspiracy theories and gross misunderstandings of science, but you chose not to, and the crickets are still chirping away, awaiting your response.
But your lack of a response to him says so much more about you than your response possibly could (short of an admission that you are wrong and an apology for attacking scientists and a promise to do better next time).
Do you see the irony? Nope. You just contradicted yourself and flip-flopped yet again, reflexively clinging to your anti-scientific conspiracy theories even tighter, as revenge for being called out with the facts by a professional scientist, just like any dime-a-dozen foaming-at-the-mouth Q-Anon freak would. That only makes you even more wrong and ridiculous looking, and doesn't hurt or fool anybody else.
Look at your own obsessive posts.
"what’s surprising is that physicists seem to be O.K. with not understanding the most important theory they have."
Sean Carroll (2019): https://www.nytimes.com/2019/09/07/opinion/sunday/quantum-ph...
I don't think the guy you are at war with deserves so much of your attention and energy. The point he's making isn't that different from what I linked above (though I reckon he gave it an arrogant spin), and the way you react somehow vindicates the parallels he drew between science and religion. Not sure how Q-Anon is related to any of this.
Could you tell me some typical uses beyond just asking for code snippets? I'm an NLP researcher, and I still find myself using Google for other types of information seeking. I'm working on NLG decoding algorithms at the moment.
I see a lot of people saying it's replaced some large fraction of their search usage, but they generally don't explain what type of queries they're making.
I'll often ask GPT to explain a concept to me, giving it a hint as to my existing level of knowledge in the subject area. With Google, you're likely to get the most popular page on a topic, not the page that is appropriate for someone with your expertise.
For instance, if I want to know about extreme ultraviolet lithography and I tell GPT-4 that I have a degree in engineering and that I studied advanced optics, the explanation is much richer in useful detail, going way beyond what any of the pages I could find on Google would reveal.
Hmm, I tried some questions that would have been relevant to me in the recent past, and it flubbed pretty hard on all of them. Even worse, instead of just saying it doesn't know, it generates semi-plausible babbling, or adjacent but not actually helpful knowledge.
I decided to give it another go and ask GPT-4 three questions which I needed to get answers to within the last few months.
Asking conceptually about DPO: https://chat.openai.com/share/6611454c-60de-4317-811b-2b7f31...
- In this one it completely leaves out the actual trick that enables DPO, so I would say it has almost no information content. Someone who didn't know what DPO is and read this would incorrectly think that they had learned something. (I've sketched the piece it omits after my summary below.)
- To learn about this, the right place was to read the original DPO paper and some follow-up work.
Asking about FSDP compatibility with LoRA: https://chat.openai.com/share/5f8892ea-61e6-496f-abda-d5a8ad...
- In this one it just says a bunch of generic, vague things without answering the question.
- The right place to learn the answer is diving through GitHub issue comments.
Asking for details of the MegaBlocks mixture-of-experts setup: https://chat.openai.com/share/c010e630-ba08-407e-afb3-03df99...
- Again, it just says generic stuff that is relevant to mixture-of-experts in general, but it leaves out everything that actually makes the MegaBlocks MoE different from the generic MoE idea.
- For this one I had to do a combination of reading the paper and the MegaBlocks repo.
So 0/3, and pretty dramatically. I was actually expecting it to get at least one of those. As far as I can tell, it didn't really do anything different based on my specifying my background either. I'd love to see any links to productive conversations that people can share.
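For reference, if I'm recalling the paper (Rafailov et al., 2023) correctly, the trick GPT-4 left out of the DPO answer is that the RLHF reward can be reparameterized in terms of the policy itself, so the partition function cancels and the whole RLHF pipeline collapses into a single classification-style loss on preference pairs:

```latex
% DPO objective: \pi_\theta is the policy being trained, \pi_{ref} the
% frozen reference policy, and (x, y_w, y_l) a prompt with preferred
% and dispreferred completions drawn from the preference dataset D.
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

That one equation is basically the entire information content the chat transcript was missing.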
That's an area where they genuinely suck, unless there is an exact Wikipedia article on the topic readily within the training data.
If you wander off that knowledge sphere and you are not sufficiently knowledgeable about the topic yourself, they can tell you some really stupid stuff.
Nonetheless, I do use it quite regularly in everyday life, as it is basically the best reverse dictionary there is (and it works for any language). For work (programming), I haven't found a better use case than occasionally passing it a list of stuff, giving it an example of what I want done with the first element, and having it generate the rest (a minimal sketch of what I mean is below).
But that is <1% of what I do each day.
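Here's the shape of that prompt, spelled out. Everything in it is a made-up example, not a real task of mine:

```python
# Minimal sketch of the "show one worked element, let the model do the
# rest" pattern. The field names and the target format are made up.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

items = ["user_name", "account_id", "created_at", "last_login"]

prompt = (
    "Convert each snake_case field to a C# property declaration.\n"
    # One fully worked example on the first element...
    f"Example: {items[0]} -> public string UserName {{ get; set; }}\n"
    # ...then hand over the remaining elements.
    "Now do the rest:\n" + "\n".join(items[1:])
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```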
Reverse dictionary is a great one, yeah. I think I've used it for resolving a tip-of-my-tongue thing several times. For language learning I'm a bit more wary: a few times I've asked it for help with Chinese, and when I ask one of my native-speaker friends, they'll tell me it wasn't quite right.
Imagine you had a coworker who was like Leonardo da Vinci, Einstein, and someone with the power to access the hive mind of Reddit all wrapped up in one, but who had dementia and sometimes spoke believably while completely hallucinating or lying, so you still needed to verify their work or double-check their answers.
Now imagine they have zero problem with you interrupting them to ask the stupidest question or the most profound, difficult one, as often as you want, and if you don't like their answer you can tell them to try again. They don't care how often you do that, either.
What kinds of questions might you ask that coworker in the course of your work? It's highly individual.
> Leonardo Da Vinci and Einstein
Then it would be able to logically reason, which it absolutely can’t do.
It's a next-generation search engine that is very good at language-related tasks (and translating Python code it has in its training set into your language is a language task, which is why it can be applicable to certain programming tasks).
It can say "let's think this through step by step", which is good enough for most cases.
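The whole trick fits in a few lines. A minimal sketch with the OpenAI Python SDK; the model name and question are placeholders, not a recommendation:

```python
# Minimal sketch of "step by step" (chain-of-thought style) prompting.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A bat and a ball cost $1.10 together; the bat costs $1 more than the ball. What does the ball cost?"

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        # Appending this one phrase nudges the model to lay out its
        # reasoning before committing to a final answer.
        "content": question + "\n\nLet's think this through step by step.",
    }],
)
print(resp.choices[0].message.content)
```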
I posted a few example questions (which are definitely way below Einstein level) in another comment. It's not doing too well! Do you have any links to conversations where it was particularly helpful? I do wonder if my GPT usage is bad in the same way my parents' Google usage is.
Maybe try asking your questions on phind.com. It does RAG on top of GPT-4, so it might be able to base its answers on the paper or GitHub issue you mentioned.
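For anyone unfamiliar, RAG (retrieval-augmented generation) just means retrieving relevant documents first and stuffing them into the prompt. A minimal sketch of the idea, assuming the OpenAI SDK and its embeddings endpoint; this illustrates the concept and is not phind's actual pipeline:

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones,
# and answer from that context. All document contents are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts):
    # Embedding model name is an assumption; any embedding API works.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

docs = [
    "Excerpt from the DPO paper ...",
    "GitHub issue comment about FSDP + LoRA ...",
    "MegaBlocks README section ...",
]
doc_vecs = embed(docs)

question = "Is FSDP compatible with LoRA?"
q_vec = embed([question])[0]

# Cosine similarity against every document, then keep the top 2.
sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = "\n\n".join(docs[i] for i in np.argsort(sims)[-2:])

answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
)
print(answer.choices[0].message.content)
```

Grounding the answer in retrieved text is what lets these tools cite a specific paper or issue instead of free-associating from training data.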
Well, Einstein and Da Vinci were fallible as well.
Asking for code snippets has replaced most of my Stack Overflow usage. I'd say in 90% of cases it answers with something usable. Maybe it's not relevant for an NLP researcher, but for a regular SWE like me, it's handy several times per day.
Completely agreed on this one, although it's maybe more like 70% for me. I did leave an exception for that specifically in my original comment.
Miscellaneous stuff from yesterday:
- Copy-pasted a website's terms & conditions into GPT-4's context and asked it to find the answer to a question I had.
- Copy-pasted a law into GPT-4's context to check whether some activity was legal. Hallucinations can be avoided by asking it to recite the relevant lines, which you can then Ctrl+F in the source to double-check (a minimal sketch of that check follows this list).
- Researched a medical condition to point me in a vague direction.
- Created a shell script and desktop shortcut with a custom PNG to launch an application with certain parameters.
- Asked perplexity.ai for the best GitHub library for a thing I wanted to do.
- Uploaded a GitHub library to https://app.getonboardai.com/chat and asked it questions.
- A handful of smaller things. Any error message or blocker goes straight into GPT-4.
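The "recite, then Ctrl+F" check can even be automated. A minimal sketch; the prompt wording, file name, and model name are all illustrative assumptions:

```python
# Ask the model to quote the relevant passage verbatim, then verify
# programmatically that the quote really appears in the source text.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

law_text = open("some_law.txt").read()  # the pasted source document

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            f"{law_text}\n\n"
            "Is activity X permitted under the text above? "
            "Quote the exact relevant lines verbatim between <quote> tags."
        ),
    }],
)
answer = resp.choices[0].message.content

# If the model fabricated the quote, this check fails.
m = re.search(r"<quote>(.*?)</quote>", answer, re.DOTALL)
if m and m.group(1).strip() in law_text:
    print("Quote verified in source.")
else:
    print("Quote not found in source - treat the answer as suspect.")
```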
Just today, I took a picture of some medication my wife got because I wanted to know more about it. I also used it to look up what Tesla probably did wrong with their Cybertruck that was causing the rusting (they likely didn't use passivated stainless), and to look up the recommended age to give popcorn to kids, since my toddlers received a small sealed bag of it for Valentine's. I pretty much use it all the time as a faster Google: whenever I wonder about something and want to know more, bam, ChatGPT.
Try asking it; reword it a few times.
> I've noticed a similar trend among folks who work on compilers, preferring to "stay in their lane" instead of embracing GPT-4
Research into ML-guided optimisation predates chatjippity.
http://chatjippity.com/ is taken, btw
chatjippity what?
It is rather annoying, because anyone who spends 15 minutes with the tool, given the right context, can easily learn its limitations and conclude that it's useful, albeit imperfect. At this point, not using an LLM in your day-to-day is like not using Google. Sure, you can do it, but are you going to be outputting work as efficiently as possible? Wouldn't you feel like doing your job without Google was hamstringing you a bit?
I definitely leave a lot on the table in terms of outputting work as efficiently as possible. In the past I’ve used personal wikis, time tracking software, personal project management software, etc. and it was a boon for my productivity. These days I don’t bother. I’m sure some people view chatbots the same way.
TBH my views on Google (and search engines in general) have also changed over time. The information available on the internet is surprisingly shallow. Anything related to proprietary tech, hardware, geographically local info, reverse engineering, or just anything that relatively few people are working on is very hard to find.
It's not about not trusting it or "staying in your lane". LLM results for domain-specific work are terrible, and they're useless besides if you're writing fairly novel code (which most ML practitioners and compiler engineers are doing if they're doing interesting work).
The usefulness of LLMs for writing code is strongly correlated with how "Stack Overflowable" your work is.