@serious_mehta
If we keep increasing the parameters, then GPT-7 might have more knowledge than all humans combined.
Scientists won't have to research anything, they can just ask GPT-7 😂
@serious_mehta
Besides accuracy of predictions on a test dataset, how else could a model's "knowledge" be evaluated? Is accuracy even a good proxy for "knowledge"?
In general, does model performance positively correlate with dataset size and/or model complexity (# of params, layers, etc.)?
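One common way to probe the dataset-size question empirically is a learning curve: fix a held-out test set, train on progressively larger training sets, and track test accuracy. Here is a minimal, self-contained numpy sketch using a toy nearest-centroid classifier on synthetic Gaussian data (the classifier, the data, and all numbers are illustrative assumptions, not anything from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two partially overlapping
# Gaussian clusters in 5 dimensions (illustrative only).
def make_data(n):
    n0 = n // 2
    x0 = rng.normal(loc=-0.3, size=(n0, 5))
    x1 = rng.normal(loc=+0.3, size=(n - n0, 5))
    X = np.vstack([x0, x1])
    y = np.array([0] * n0 + [1] * (n - n0))
    return X, y

# One fixed held-out test set, reused for every training-set size.
X_test, y_test = make_data(1000)

def nearest_centroid_accuracy(n_train):
    # Train: estimate one centroid per class from n_train samples.
    X, y = make_data(n_train)
    c0 = X[y == 0].mean(axis=0)
    c1 = X[y == 1].mean(axis=0)
    # Predict: assign each test point to the nearer centroid.
    d0 = np.linalg.norm(X_test - c0, axis=1)
    d1 = np.linalg.norm(X_test - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return float((pred == y_test).mean())

# Learning curve: accuracy as a function of training-set size.
sizes = [10, 100, 1000]
accs = [nearest_centroid_accuracy(n) for n in sizes]
```

With more training data the centroid estimates get less noisy, so accuracy typically rises toward the noise ceiling of the data and then flattens, which is one reason raw accuracy alone is a limited proxy for "knowledge": past that ceiling, more data or parameters stop showing up in the metric.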