I use machine learning to boldly go where no human has gone before in experimental sciences. Opinions are my own, but please don’t hold them against me.
This week, the Nobel Committee set a terrible precedent by awarding a Nobel Prize to someone with a relatively low h-index. What kind of message does it send to young researchers?!
A grad student should automatically receive a Ph.D. degree if they publish N first-author papers (N varies by discipline, but generally N>=2). In today’s competitive job market, you shouldn’t waste time writing a PhD thesis no one will ever read.
What #LK99 has demonstrated is that the scientific community is perfectly capable of doing peer review via arXiv and social media, openly and transparently, further underscoring how irrelevant traditional journal publications have become.
I frequently come across bios that go, “I got my PhD from this fancy place,” “did my postdoc in that very famous lab,” “published many papers and won grants and awards,”… with zero mention of what that person has actually accomplished.
Students and postdocs should be allowed to submit and publish papers without their PIs. If your PI didn’t do any actual work (measurements, calculations, etc.), they shouldn’t be on the author list of your paper.
#AcademicTwitter
#AcademicChatter
Suppose I want to add my dog as a co-author on my paper to justify using 'we' in an otherwise solo-authored paper. What should I list as his affiliation?
I find science PR very nauseating. Every paper has to be presented as a paradigm shift that is going to enable a revolution in materials, energy, blah blah blah. No one really believes it, especially not the papers’ authors.
@BenjaminDEKR
Perhaps, a dead end? Imagine coming to work on AGI to change the world and ending up working on a digital waifu and re-creating Google's demo from 7 years ago.
If your ML model is not grounded in physics, you'll be getting garbage outputs the moment you move away from the training data distribution. No amount of GPUs is going to help you.
Looking back at the papers I authored or co-authored over the past 10 years, I can identify 3-4 papers that I think ended up being *somewhat* useful. The rest was a tremendous waste of time.
LinkedIn: I’m happy to announce … blah-blah-blah… standard template.
Twitter: We will try reproducing the experiment in that superconductivity paper and will be live-tweeting our progress.
What if, and please hear me out, we cease the habit of labeling every research paper as revolutionary and acknowledge that science is inherently a methodical and gradual process, one that values incremental advancements? It's time to abandon sensationalism.
My frustration with a lot of research in condensed matter and materials is that it looks like nothing gets done beyond (often overhyped) publications. People chase one “sexy” topic after another, promising better or new types of devices, but it’s all BS (and they know it).
Materials science is a truly interdisciplinary field: to be successful, you need a solid background in physics & chemistry, hands-on experience with multiple characterization & simulation techniques, and familiarity with AI/ML methods.
The rate at which my Google Scholar citations grow seems to be decelerating. Therefore, from now on, every time I review someone's paper, I'm going to ask them to cite a minimum of 10 (instead of the usual 1-2) of my papers.
Do I understand correctly that once your citations stop growing (relative to previous years), you should quit doing science and start doing something else? At least, that’s my plan. Quit while you’re at or near the top of your game.
I am reviewing a paper and it is really good and there’s not much to comment on. I already read it twice and still can’t find anything to pick up on. Crazy…
Journals should advertise upcoming papers as movie trailers. “From the creators of XX method…”, “An unprecedented development in area YY…”, “This December… One of the most awaited articles in …”, “Only in [the name of a journal].”
Our work on the active learning of structure-property relationships in ferroelectric materials via the automated experiment in scanning probe microscopy was just published in Nature Machine Intelligence:
A fully Bayesian implementation of Heteroskedastic GP is now available in GPax:
🔍 Handle varying noise in input data
🛠 Dual GPs for signal & noise modeling
🧠 Custom kernels & priors for deep customization
Notebook with example ->
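For intuition, here is a minimal NumPy sketch of the core idea (this is not GPax's actual API, and the noise levels are assumed known rather than learned by a second GP, as GPax does): a heteroskedastic GP puts input-dependent noise variances on the kernel diagonal instead of a single constant.

```python
import numpy as np

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Toy data: the noise level grows with x (heteroskedastic)
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
noise_std = 0.05 + 0.1 * x              # input-dependent noise level
y = np.sin(x) + rng.normal(0, noise_std)

K = rbf(x, x)

# Homoskedastic GP: one constant noise variance on the diagonal
K_homo = K + 0.1**2 * np.eye(len(x))

# Heteroskedastic GP: per-point noise variances on the diagonal
K_hetero = K + np.diag(noise_std**2)

# Posterior mean at test points (standard GP regression formula);
# noisy high-x points are automatically downweighted
x_test = np.linspace(0, 5, 100)
Ks = rbf(x_test, x)
mean = Ks @ np.linalg.solve(K_hetero, y)
```

In the fully Bayesian version, the per-point noise variances are themselves modeled with a second GP and inferred jointly with the signal, rather than fixed in advance as here.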
Most people go to conferences to i) make new connections, ii) drink with their buddies whom they haven’t seen in a while, and iii) do some sightseeing. Virtual conferences took all that away and forced us to listen to your boring science. Glad that in-person conferences are back.
If you spent time around people conducting sophisticated experiments that push the boundaries of knowledge, or those developing instrumentation for those experiments, you would realize that there is no way AI, in its current state, can replace them.
It occurred to me that a lot of scientific research these days is actually engineering (e.g., finding optimal material systems for specific applications), which often operates under the guise of science to bypass rigorous scrutiny of its deliverables.
I think the way Elon Musk always takes credit for the work of talented engineers at Tesla/SpaceX is bad. Just because he funds these efforts and maybe outlines some general ideas doesn’t give him the right. I’m just glad we don’t have this problem in academia.
#AcademicTwitter
@SwipeWright
Yes, the "educated vs. non-educated" polls are implicitly rooted in the belief that college graduates can think for themselves and are less susceptible to propaganda, but this may not be the case.
My advice to grad students considering doing a postdoc:
1) Don’t do it. Try finding a well-paying job in the industry instead.
2) If (1) isn’t an option due to e.g. visa restrictions, try to find a postdoc position in a national lab.
When my wife and I were dating, I liked to talk about my research, and she always pretended she was interested. I didn’t realize she was pretending at the time; I didn’t find out for years. A few months ago she finally told me, and now I love her even more ❤️
Me: Scientific publishing is broken, journal impact factors are meaningless, there’s almost no correlation with research quality, blah-blah-blah
Me after my paper gets accepted in a journal with IF>30: Well, maybe it’s not that bad. It looks like the system still works.
I feel really bad for young folks entering academic research these days as there’s so much emphasis on writing bullshit papers and so little on learning to build things that last.
Thrilled to share that our team at Oak Ridge National Lab & University of Tennessee has been honored with the R&D 100 award for the work on “Physics-Informed, Active Learning–Driven Autonomous Microscopy for Science Discovery!”
Much of today's research seems to be a treadmill: incessantly writing papers to obtain more funding just to write additional papers, all while yielding little practical value.
Speaking of “self-driving” labs. This is a typical setup for many surface science experiments (from my postdoc days). Good luck automating the whole thing. It is made for humans, not robots. But what you can do is use AI to automate data collection, analysis, & sample navigation.
I was tricked into studying solid-state physics by being told that I would be able to move individual atoms and use this power to build new materials and devices. What about you?
AI/ML in a nutshell: given enough data, you can train an ML model to make reasonable predictions on new inputs that are similar to the training examples, in pretty much any domain. It will also fail on new data outside the training distribution.
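A toy illustration of that failure mode (hypothetical numbers; a polynomial fit stands in for the ML model): the fit looks great inside the training range and falls apart the moment you extrapolate.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Training data": noisy samples of sin(x) on the interval [0, 3]
x_train = rng.uniform(0, 3, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)

# Fit a flexible model (a degree-9 polynomial here)
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# In-distribution: predictions track the ground truth closely
x_in = np.linspace(0.5, 2.5, 50)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Out-of-distribution: extrapolating to [5, 6] blows up
x_out = np.linspace(5, 6, 50)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))
```

The out-of-distribution error ends up far larger than the in-distribution one; no amount of extra compute changes that, only data (or physics) covering the new regime does.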
Kornia is an excellent library for (differentiable) computer vision that works directly with PyTorch tensors. If you do deep learning in PyTorch, I highly recommend using Kornia instead of torchvision.
We love to say that we need more physics in ML in part because it justifies our funding/existence. However, my somewhat subjective observation has been that, beyond simple toy problems, physics constraints tend to do more harm than good to ML efficiency.
True story: I was being ratioed into oblivion for telling the truth about middle-author publications. And when all hope seemed lost,
@timgill924
, the king of sociology, retweeted me. Soon, the Twitter army of light came to save me. You can’t defeat us 'cos we have truth on our side.
When one is very knowledgeable in their field, they tend to quickly see everything that could go wrong and become resistant to new, high-risk, high-reward ideas. That’s precisely why we need generalists alongside specialists: to balance caution with creativity.
If you work on ML for science, trying to compete directly with places like DeepMind is silly. Better to identify areas where DeepMind can’t shine (yet) because they don’t have access to the necessary equipment and expertise, such as advanced experimentation w/ neutrons, photons, nano
In the old days, if an experimentalist had this type of question, they would need to arrange a meeting with folks from the CS/math department. Now you have a digital assistant that instantly answers the question and provides a code example from an open-source library.
I was asked what’s wrong with the “the more, the merrier” approach to authorship of scientific publications. The answer is very simple: it allows smooth talkers who are good at politics to take credit (partially or even fully) away from the people who do the actual work.
@Andrew_Akbashev
I’m not a fan of h-index-based evaluation, but Andre Geim had a low h-index because he was working in post-Soviet Russia. Once he moved to the West, his and Novoselov’s h-index skyrocketed pretty quickly, which correlates with the impact of their work.
This week I've got four papers accepted in “high-impact” journals, so I again think that publication system works and that publications are a good metric of scientific progress. Have a lovely weekend.
One often overlooked practical success of deep learning is the Grammarly app. It does exactly what AI is supposed to do: increase our productivity without replacing us. That’s how I also see the realistic applications of AI in science for the near future.
Made it to the front page of C&EN as the “quote of the week.” 😎 There’s also a story featuring our AtomAI software alongside other amazing software that accelerates scientific research. Check it out here:
Tired of applying AI/ML to computational databases with no connection to the real world? Want to use your models to autonomously operate state-of-the-art scientific equipment and advance science for humanity? We are hiring! 🚀
Consider applying for the 'Machine Learning for…
I occasionally review manuscripts + code repos for the Journal of Open Source Software, and out of all the journals I have reviewed for, this one stands out for having the most meaningful and transparent review process.
@paulg
People drastically underestimate how much of what they do is mere interpolation between “training examples.” All of this will be replaced by AI, probably in 5-10 years, freeing up time for us to focus on the 1-2% of “edge cases” that require true creativity.
Our new work on deep learning-enabled atomic engineering in automated scanning transmission electron microscopy. Finally, the promise of deep learning for STEM has been fulfilled. By
@KevinRoccaprio1
et al.
Forgive me for asking, but how has the experimental discovery/confirmation of the Higgs boson in 2012 improved our lives or opened doors to new technology?