@serious_mehta
Tanay Mehta
1 year
When you have no clue what LLMs actually are and what we mean by parameters but you still tweet —
@danapke
Daniel Apke
1 year
@nhutter28 Here is a scary look at where we are vs the knowledge GPT-4 will have.
[Image: circle graphic comparing GPT-3 and GPT-4 parameter counts]
143
464
3K
13
8
184

Replies

@averma12
Abhinav Verma
1 year
@serious_mehta Can people please report this circle pic.
0
0
2
@nezubn
Ankit Sharma
1 year
@serious_mehta If we increase the parameters then GPT-7 might have more knowledge than all humans combined. Scientists won't have to research anything, they can just ask GPT-7 😂
1
0
7
@marccodess
Marc
1 year
@serious_mehta The way he is responding to people correcting him is comical.
0
0
2
@kanpuriyanawab
Anshuman Mishra (e/ia)
1 year
@serious_mehta If "Mai expert mujhe sab aata" had a tweet form 😂😂😂
0
0
4
@Mellophi
Ayon
1 year
@serious_mehta lol wtf man 😂
0
0
1
@VilsiJ
Vilsi Jain
1 year
0
0
2
@coffeedcognac
Sensei / Ōnyē n'kùzí
1 year
@serious_mehta That's savage man🤣
0
0
1
@RHotker
Rakesh Hotker
1 year
0
0
1
@nevrekaraishwa2
Aishwarya Nevrekar
1 year
0
0
2
@KaranSMittal
KaranShyam
1 year
0
0
2
@ephraimAdmassu
ephraim
1 year
@serious_mehta I will tell you one thing though: the math behind all this is high school level algebra at best.
0
0
0
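To make the claim above concrete: the forward pass of a single network layer is just multiplies and adds followed by a simple nonlinearity (training adds calculus, but the core arithmetic is elementary). A minimal sketch in plain Python, with toy layer sizes and made-up weights purely for illustration:

```python
# Toy forward pass of one fully connected layer: y = ReLU(W·x + b).
# The weights and inputs below are invented for illustration only.

def linear_layer(weights, bias, x):
    """Multiply-and-add: each output is a weighted sum of the inputs, plus a bias, through ReLU."""
    outputs = []
    for row, b in zip(weights, bias):
        total = sum(w * xi for w, xi in zip(row, x))  # dot product: just multiplies and adds
        outputs.append(max(0.0, total + b))           # ReLU nonlinearity
    return outputs

# 2 inputs -> 3 outputs, arbitrary toy values.
W = [[0.5, -1.0],
     [2.0,  0.3],
     [-0.7, 0.9]]
b = [0.1, 0.0, -0.2]
x = [1.0, 2.0]

print(linear_layer(W, b, x))  # approximately [0.0, 2.6, 0.9]
```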
@OverThinkinRunr
Rob Kehoe
1 year
@serious_mehta Besides accuracy of predictions on a test dataset, how could a model’s “knowledge” be evaluated? Is accuracy even a good proxy for “knowledge”? In general, does model performance positively correlate with dataset size and/or model complexity (# of params, layers, etc.)?
0
0
1
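On the question above: in practice a model's "knowledge" is usually approximated by scoring its answers against a held-out question set, with metrics such as exact-match accuracy; whether that accuracy really tracks "knowledge", and how it scales with parameter count, is exactly the open question being asked. A minimal sketch of such a scorer in Python; `model_answer`, the toy QA pairs, and the dummy model are hypothetical stand-ins, not a real benchmark or API:

```python
# Minimal exact-match accuracy over a held-out QA set.
# Everything here is a toy stand-in for illustration, not a real evaluation suite.

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so trivial formatting differences don't count as errors."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def exact_match_accuracy(model_answer, eval_set):
    """Fraction of questions where the model's answer exactly matches the reference after normalization."""
    correct = sum(normalize(model_answer(q)) == normalize(a) for q, a in eval_set)
    return correct / len(eval_set)

if __name__ == "__main__":
    eval_set = [
        ("What is the capital of France?", "Paris"),
        ("How many legs does a spider have?", "8"),
    ]
    # Stand-in "model" that always answers "Paris".
    dummy_model = lambda question: "Paris"
    print(exact_match_accuracy(dummy_model, eval_set))  # 0.5
```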