Distinguished Prof of ECE and Feedzai Prof of machine learning @istecnico, @itnewspt
Professor, photographer, music lover, curious about almost everything.
The Cauchy-Schwarz inequality is the cornerstone of so many results and proofs. I learned this beautifully concise proof in a nice little paper by Jean-Baptiste Hiriart-Urruty, "Des démonstrations qui font boum!" ("Proofs that make boom!").
@EricTopol There was almost no politicization of vaccination; all politicians (both in power and in opposition) supported the vaccination process. Also, although there are a few crazy anti-vaxxers, their number is really very small.
Insightful and clear explanation of why DALLE uses diffusion models rather than GANs, with a great take-home message: "All the hyper-parameter tuning in the world can't beat a few lines of thoughtful math."
Thanks, Tom.
Why have diffusion models displaced GANs so quickly? Consider the tale of the (very strange) first DALLE model. In 2021, diffusions were almost unheard of, yet the creators of DALLE had already rejected the GAN approach. Here’s why. 🧵
Oldies but goldies: M. Figueiredo, R. Nowak, An EM algorithm for wavelet-based image restoration, 2003. Introduces Iterative Soft Thresholding to solve the lasso.
Congratulations to my friend, colleague, and former student @andre_t_martins for his brilliant habilitation defense. @istecnico and @itnewspt must be extremely proud of having such a world-class researcher.
Oldies but goldies: @mariotelfig, R. Nowak, An EM algorithm for wavelet-based image restoration, 2003. The first appearance of the celebrated Iterative Soft Thresholding Algorithm to solve the lasso.
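The update introduced in that paper is what is now called ISTA: alternate a gradient step on the quadratic data term with a soft-thresholding step. Here is a minimal NumPy sketch of a generic lasso solver in that style (my own toy setup, not the wavelet-restoration experiment of the paper):

```python
import numpy as np

def ista(A, y, lam, step=None, n_iter=500):
    """Iterative Soft Thresholding for the lasso:
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    if step is None:
        # step = 1/L, with L the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)            # gradient of the smooth term
        z = x - step * g                 # gradient step
        # soft-thresholding = proximal operator of the l1 norm
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Tiny demo: sparse ground truth, noiseless observations
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
```

With a noiseless, well-conditioned problem like this, the estimate recovers the true support and nearly the true amplitudes.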
Many congratulations to @istecnico professors Anabela Cruzeiro and Mário Figueiredo @mariotelfig on their election to the Academy of Sciences of Lisbon - Mathematics Section of the Class of Sciences @acadcienciaslx
If you haven't seen it, stop what you're doing and watch Michael Bronstein's (@mmbronstein) invited talk at ICLR. What a stunning, insightful, and inspiring presentation!
Excellent opinion article by Weinan E; recommended!
"With machine learning coming into the picture, all major components of applied math are now in place. This means that applied math is finally ready to become a mature scientific discipline".
The Woodbury formula allows one to compute the ridge-regression solution in two different ways. It even makes it possible to use infinite-dimensional feature spaces (p = ∞) via kernelization.
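A small NumPy sketch of the two routes (my own variable names): the primal form solves a p×p system, while the dual form, obtained via the Woodbury identity, solves an n×n system and only ever touches the Gram matrix X Xᵀ of inner products, which is exactly what kernelization exploits.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 100          # more features than samples
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
lam = 0.5

# Primal form: solve a p x p system
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Dual form (Woodbury identity): solve an n x n system instead;
# only the Gram matrix X @ X.T is needed, so a kernel can replace it
w_dual = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

print(np.allclose(w_primal, w_dual))  # the two routes agree
```

When p ≫ n (as here), the dual route is also much cheaper: an n×n solve instead of a p×p one.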
Just out! Survey on conformal prediction for NLP, on which I had the pleasure to collaborate with the students Margarida Campos and @tozefarinhas and my colleagues @chryssaZrv and @andre_t_martins.
Attending my first live academic defense since the beginning of the pandemic. And it couldn't be a better one: my son Guilherme is defending his Master's thesis in anthropology.
So, @insect_microbe and I wrote a little something on animal-microbe symbioses, and how the microbes within interact! My first-first-author and his first-last-author, super happy to see it out 🥳
It's a bit of a mouthful, but here goes the title of my just-submitted PhD thesis! A massive thanks to the best supervisors one could ask for, @RolfsMicrobes & @WagnerEvolution
Scientists of Twitter! The world needs more reminders that we're living, breathing human beings with lives outside a lab. Quote tweet this with a picture of you doing not-science!
No water, no electricity, all home sweet home. I miss 137...
Here's my conversation with Manolis Kellis (@manoliskellis), head of the MIT Computational Biology Group, about the beauty of the human genome, evolutionary dynamics, mutation, inheritance, viruses, language, free will, and life.
Many decades ago, Markov and Shannon laid the conceptual and mathematical/statistical foundations of language models, starting a research direction that eventually led to the modern large language models.
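Shannon's character n-gram experiments, which Markov's chains made possible, can be sketched with a tiny bigram model (a toy illustration with made-up training text, not Shannon's exact procedure):

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Character-level bigram model: counts of next char given current char."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample characters one at a time from the bigram frequencies."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:          # dead end: no observed successor
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_bigram("the theory of communication and the theory of computation")
sample = generate(model, "t", 20)
```

Modern large language models are, conceptually, this same idea scaled up: predict the next token from context, with a neural network replacing the count table.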
"We [humans] are not the inventors of the first digital computer - we're its descendants."
MIT computational biologist Manolis Kellis on digital inheritance and the human genome: @lexfridman #genomics
@mraginsky I remember once hearing (or reading) some optimization person saying something in the same spirit: dividing functions into convex and non-convex is like dividing all animals into elephants and non-elephants.
A figure from our 2007 paper (with @rdnowak and @madsjw). After using l1 regularization, debiasing makes a huge difference!
Thanks @sirbayes for bringing back this memory from 16 years ago.
@aaron_defazio Yes, I mention this in sec 11.4.3 of my book (). It is called "debiasing" and does indeed improve prediction error a lot - see example below (compare L1 in row 2 to MLE on the support of L1 solution in row 3).
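The debiasing idea is easy to demonstrate on a toy problem. In the sketch below (my own synthetic example with an orthogonal design, where the lasso solution is exactly a soft-thresholded least-squares estimate; not the experiment from the book or the 2007 paper), refitting least squares on the lasso support removes the shrinkage bias:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 50
# Orthogonal design (Q^T Q = I): the lasso solution is then just
# soft-thresholding of the least-squares estimate
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
x_true = np.zeros(p)
x_true[:5] = 3.0
y = Q @ x_true + 0.1 * rng.standard_normal(n)

lam = 0.5
x_ls = Q.T @ y                                               # least squares
x_lasso = np.sign(x_ls) * np.maximum(np.abs(x_ls) - lam, 0)  # lasso estimate

# Debiasing: refit least squares restricted to the lasso support
S = np.flatnonzero(x_lasso)
x_deb = np.zeros(p)
x_deb[S] = np.linalg.lstsq(Q[:, S], y, rcond=None)[0]

err_lasso = np.linalg.norm(x_lasso - x_true)
err_deb = np.linalg.norm(x_deb - x_true)
```

The lasso estimate is biased toward zero by roughly lam on each active coefficient; the debiased refit keeps the support selection but drops that bias, so `err_deb` is much smaller than `err_lasso`.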
Register for the 20th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, to be held @istecnico, 22nd – 26th July.
📅 Short paper submission: 23rd April
📅 Early bird registration: 7th May
more info
Next week, we have the great pleasure of hosting @EmtiyazKhan, presenting "The Bayesian Learning Rule for Adaptive AI" at the Math, Physics & Machine Learning webinar series at IST. Don't miss it (note the unusual time). @istecnico @LUMLIS1
@docmilanfar Nowadays they are learned by CNNs, but no longer called by that name. I guess people under a certain age haven't seen the impact they had two or three decades ago.
21 years ago, at the NATO-ASI on Learning Theory and Practice, hosted by Johan Suykens (Leuven). Many famous names of "old ML": Bartlett, Bishop, Cristianini, Devroye, Gyorfi, Poggio, Schölkopf, Singer, Smale (Fields medalist), Vapnik, and many others.
A great couple of weeks!
@beenwrekt Killer app? I don't know; it looks more like an interesting demo. The killer app for AI is (and has been for way more than a decade) social media and surveillance capitalism. Just follow the money and see who invested massively in developing AI.
My friend and colleague @arlindooliveira and I co-authored the first chapter of a recent book on AI and the law, with a brief overview of the history of AI and ML and some current highlights.
Just published: the chapter "Artificial Intelligence: Historical Context and State of the Art", part of a recent Springer book on AI and Law. #IDSSMLKD #LUMLIS
Just finished reading Rutger Bregman's book "Humankind". I highly recommend it. It made me see humanity in a more positive and hopeful light (and it is a fascinating read).
New paper to be presented at CLeaR 2023, on telling cause from effect (one of the core problems in causal discovery) on categorical data, introducing the "uniform channel model".
@pmddomingos Or as Baron Schwartz once wrote: "When you're fundraising, it's AI. When you're hiring, it's ML. When you're implementing, it's linear regression."
@LConraria Pointing to the fact that there are many vaccinated people who fall ill, while ignoring the high vaccination rate, is a clear example of the base-rate fallacy, as I wrote two days ago in this article:
Starting soon! @gabrielpeyre is giving a webinar on "Scaling Optimal Transport for High Dimensional Learning", starting at 7:00 pm CET today. Not to be missed. More information here:
Mário Figueiredo (Técnico / @itnewspt) highlights songs “whose combination of music and poetry touches [him] particularly”. “Music can have a profound, even visceral, influence on emotions.” 🎶 #TécnicoPlayLists #TécnicoScientists (8/x)
Not to be missed: @CevherLIONS seminar at the Mathematics, Physics, and Machine Learning series, at @istecnico, next Thursday, September 30, at 6 pm CET: "Optimization Challenges in Adversarial Machine Learning". @LUMLIS1
RIP!
Jacob Ziv and Abraham Lempel, of Lempel-Ziv fame, recently passed away, less than two months apart. Two highly influential giants of information and coding theory.
In Memoriam: Jacob Ziv
Abraham Lempel
Afonso Teodoro (a PhD student I supervised in collaboration with my colleague José Bioucas-Dias) receiving the IBM Scientific Prize for his PhD work.
@NandoDF I have mixed feelings about this, @NandoDF. If you're GH, a famous Turing award winner, it's perfectly OK to admit fatal flaws in some of your recent ideas and cancel talks. Could a young, not-yet-tenured researcher/academic afford/dare to do something like that?
Big experimental evolution by @alex_fig with @WagnerEvolution. We have worked on cheating for many years. This is now coming to an end, and this manuscript is the ultimate roundup.
Next Friday, at 3 pm (CET), Mathieu Blondel (@mblondel_ml) will tell us about his most recent work on efficient and modular implicit differentiation. Don't miss it. @LUMLIS1
I just finished reading this very nice little book by Stephen Stigler. Very interesting view of the deep roots (and their historical origins) of the core ideas of statistics.
In 2 hours (12:30 Lisbon time) I will give a talk with an overview of the work of my late colleague and friend José Bioucas-Dias. You're all welcome to attend (Zoom).
Loved the episode of @seanmcarroll's great podcast (Mindscape) with @wyntonmarsalis.
WM tells this story of Dizzy Gillespie asking Louis Armstrong “Pops, why you always looking up high? What are you looking for?” Pops told him, “I don’t know brother Diz, but I always find it.”
Reading @erikphoel's book, "The World Behind the World: Consciousness, Free Will, and the Limits of Science", with the help of another consciousness.
Something is wrong in today's academic AI/ML.
"LeCun, who also has a professorship at NYU, agreed times have changed, noting that as professors, 'we would not admit ourselves in our own PhD programs.'"
@pmddomingos Physics gave us transistors, from which all modern electronics descends, and optical fibers, the backbone of all modern communications. Even considering only the post-WW2 contributions, the ROI of physics is huge!
Congratulations to @CevherLIONS, who has been named an @IEEEorg Fellow - the organization's highest honor - for his contributions to model-based signal processing and semi-definite programming!
@mjoanasa @istecnico Joana, a very interesting and relevant discussion. Many people at IST give exaggerated weight to assessment and to a supposed "rigor" ("exigência"). We should be more of a watering can and less of a funnel. This reminded me of a very interesting text by David MacKay:
Next Thursday, we very happy to host
@tomgoldsteincs
talking about "Building (and breaking) neural networks that think fast and slow", at the MPML seminar series,
@istecnico
.
More details (including Zoom link) at
@LUMLIS1
Just finished reading Andreas Wagner's most recent book. It's hard to summarize, but it's about how essential diversity is, from physics, biochemistry, and biology to human creativity. Highly recommended.
@michael_nielsen I guess a LaTeX-generated picture is cheating, but I like this one-line proof of Cauchy-Schwarz (which I learned from Jean-Baptiste Hiriart-Urruty):
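For reference, here is the standard one-line discriminant argument (possibly not the exact proof in Hiriart-Urruty's note): since a nonneg­ative quadratic in $t$ must have nonpositive discriminant,

```latex
% For all real t (and v \neq 0; the case v = 0 is trivial):
0 \le \|u - t v\|^2 = \|v\|^2 t^2 - 2\langle u, v\rangle t + \|u\|^2
\;\Longrightarrow\;
4\langle u, v\rangle^2 - 4\|u\|^2 \|v\|^2 \le 0
\;\Longrightarrow\;
|\langle u, v\rangle| \le \|u\|\,\|v\|.
```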
I'm delighted to announce that I have been awarded an @ERC_Research Consolidator Grant for the project DECOLLAGE (DEep COgnition Learning for LAnguage GEneration). I'm now looking for highly motivated PhD students and post-docs to join me in Lisbon to work on this project :)
@docmilanfar @ddonoho11 It is a beautiful paper; I have recommended it many times. I also love this one, by David Mumford: "The Dawning of the Age of Stochasticity".
Excellent video about exponential growth for those who are not very familiar with the corresponding math. For my more tech/math-knowledgeable friends, I also recommend it as excellent from a pedagogical perspective.
With recorded COVID-19 cases (outside china) so eerily matching an exponential, I couldn't resist making a primer on exponential/logistic growth. At least 3 counterintuitive things about this kind of growth seem worth putting into the discussion.
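The exponential-then-saturating behavior behind the video can be checked numerically with the closed-form logistic curve (a generic sketch with made-up parameters, not the video's data):

```python
import numpy as np

def logistic_curve(t, K=1.0, r=0.3, x0=1e-3):
    """Closed-form logistic growth: x(t) = K / (1 + (K/x0 - 1) e^{-rt}).
    Early on it is ~ exponential (x0 * e^{rt}); later it saturates at K."""
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

t = np.arange(0, 60)
x = logistic_curve(t)

# Early phase: nearly indistinguishable from pure exponential growth,
# which is why early epidemic curves look exponential
early = x[:10]
exp_approx = 1e-3 * np.exp(0.3 * t[:10])
```

The counterintuitive point is exactly this: while x ≪ K, exponential and logistic growth agree to within a few percent, so saturation is invisible until it is almost upon you.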