> Our understanding is that only authors of papers appearing on arXiv can submit withdrawal requests. We have directed the author to submit such a request, but to date, the author has not done so.
Between this and the subtle reference to “former second-year PhD student” it makes sense that they’d have to make a public statement.
They do a good job of toeing the required line of privacy while also giving enough information to see what’s going on.
I wonder if the author thought they could leave the paper up and ride it into a new position while telling a story about voluntarily choosing to leave MIT. They probably didn’t expect MIT to make a public statement about the paper and turn it into a far bigger news story than it would have been if the author quietly retracted it.
That's not how it works in the real world. That would be a fraudulent request and I suspect they'd invite legal trouble by impersonating someone else to access a computer system.
Furthermore, if the author could demonstrate to arXiv that the request was fraudulent, the paper would be reinstated. The narrative would also switch to people being angry at MIT for impersonating a student to do something.
I've done it for people who used my email to sign up for Facebook and Instagram. Presumably now they have a more rigorous verification flow but they used to let people use any email without checking. I can't have a potential criminal using a social account connected to me, so password reset and disable the account is the only rational solution. Obviously this is slightly more problematic for an institution.
It's infuriating that Instagram, Facebook, etc. send an "Email Verification" email that has NO option to say "Nope, not me, don't want it, don't do it".
Worse, I'd like to create my own Instagram now, but cannot, because somebody else tried to use my email a decade ago and now all I get is a very very confused loop.
I'm sure these systems make sense to somebody, there's detail and nuance and practicality I'm horribly ignorant of, but they just seem insanely unprofessional to me as an outsider :-/
> Worse, I'd like to create my own Instagram now, but cannot, because somebody else tried to use my email a decade ago and now all I get is a very very confused loop.
Why not use a different email address? Nothing about that would make it less your “own” Instagram account.
>That's not how it works in the real world. That would be a fraudulent request and I suspect they'd invite legal trouble by impersonating someone else to access a computer system.
Emails are not people. You can impersonate a person, but you can't impersonate an email address. If I own a company and issue the email dick.less@privateequity.com but then have to fire him... using that email address to transfer company assets back to someone who can be responsible for them isn't fraud (for that purpose, at least). How is this not the same issue?
This would be a coherent argument if the paper was submitted by an email address. Instead the paper was submitted by a person. The email address serves to identify the person. Only the person can redact the paper.
If you misrepresent that you are dick.less then yes that would be fraud. They say only the authors can submit withdrawal requests, so you would have to present yourself as the author even though you aren't. That's fraud.
Although not explicitly stated, I read the previous comments as using dick.less@privateequity.com to cancel his personal Netflix account. (Let's say that privateequity.com allowed personal use of company email.)
I see a difference between accessing an email account and impersonating the previous account holder.
MIT is hiding its own culpability by throwing the student under the proverbial bus. Acemoglu and Autor, notorious attention seekers and very media-savvy, wealthy profs, had vouched for him. There is no way a 2nd-year PhD student could have pulled this off on his own without a trace of his whereabouts and contacts in the industry.
A cursory review of the first paragraph of the abstract of his single author paper should've set off alarms:
"AI-assisted researchers discover 44% more materials, resulting in a 39% increase in patent filings and a 17% rise in downstream product innovation".
Anyone with rudimentary familiarity with industrial materials-science research would have been suspicious of those double-digit numbers; even single-digit improvements are extremely rare.
Apparently, he also attempted to create a fake website to cover his tracks, registering the domain on Jan 12, 2025, potentially to suggest that Corning was the company he worked with. This drew a WIPO complaint, whereby Corning compelled transfer of the domain name:
>It looks eerily similar to the distribution in this [pharma] preprint. This distribution might make sense for drugs, but makes very little intuitive sense for a broad range of materials, with the figure of merit derived directly from the atomic positions in the crystal structure. This is the kind of mistake that someone with no domain expertise in materials science might make.
In retrospect, there seems to be a tell that when he's lying he won't look at the screen/camera: his eyes go up, left, right, anywhere but forward. What I find scary is that this practice of extemporaneous fabrication may be a well-ingrained habit at this point that isn't limited to the scientific realm of the author's life.
1. The data in most of the plots (see the appendix) look fake. Real life data does not look that clean.
2. In May of 2022, six months before ChatGPT put genAI in the spotlight, how does a second-year PhD student manage to convince a large materials-lab firm to conduct an experiment with over 1,000 of its employees? What was the model used? It only says GANs+diffusion. Most of the technical details are just high-level general explanations of what these concepts are, nothing specific.
"Following a short pilot program, the lab began a large-scale rollout of the model in May of 2022." Anyone who has worked at a large company knows -- this just does not happen.
On point 2, the study being apparently impossible to conduct as described was also a problem for Michael LaCour. Seems like an underappreciated fraud-detection heuristic.
> As we examined the study’s data in planning our own studies, two features surprised us: voters’ survey responses exhibit much higher test-retest reliabilities than we have observed in any other panel survey data, and the response and reinterview rates of the panel survey were significantly higher than we expected.
> The firm also denied having the capabilities to perform many aspects of the recruitment procedures described in LaCour and Green (2014).
Wayback of the Sloan School seminar page shows him doing one on February 24, 2025. I wonder how that went.
I miss Google Search's cache. As with the seminar, several other hits on MIT pages have been removed. I'm reminded of a PBS NewsHour story on free fusion energy from water in your basement (yes, really), which was memory-holed shortly after. The next-ish night they seemed rather put out, protesting that they had verified the story... with "a scientist".
That cassyni talk link... I've seen a lot of MIT talks (a favorite mind candy), and though Sloan talks were underrepresented, that looked... more than a little odd. MIT Q&A norms are diverse, from the subtle question you won't appreciate if you haven't already spotted the fatal flaw, to bluntness leaving the speaker in tears. I wonder if there's a seminar tape.
Oh interesting. I haven't talked to any recent graduates but I would expect an MIT PhD student to be more articulate and not say "like" every other word.
There was a question at the end that made him a little uncomfortable:
[1:00:20]
Q: Did you use academic labs only or did you use private labs?
A: (uncomfortable pause) Oh private, yeah, so like all corporate, yeah...
Q: So, no academic labs?
A: I think it's a good question (scratches head uncomfortably, seemingly trying to hide), what this would look like in an academic setting, cause like, ... the goals are driven by what product we're going make ... academia is all, like "we're looking around trying to create cool stuff"...
My 8-year-old is more articulate than this person. Perhaps they were just nervous; I'll give them that, I guess.
Don't confuse a polished TED talk from a practiced speaker with a seminar from any random person in academia. I'm sad to admit I've given talks (many recorded) that are much worse than this.
These seminar-style talks in particular have a strong Goodhart bias: academic scientists judge each other on the papers they write, but the highest honors usually come in the form of invited talks. The result is that everyone is scrambling to have their students give talks.
In larger scientific collaborations it can get a bit perverse: you want to get everyone together for discussions, but the institutes will only pay for travel if you give their students a 20 minute talk. You'll often have conferences where everyone crams into a room and listens to back-to-back 20 minute lectures for a week straight (sometimes split into multiple "parallel sessions"), and the only real questions are from a few senior people.
It's a net positive, of course: there's still some time around the lectures and even in 2025 there's no good replacement for face-to-face interaction. But I often wish more people could just go to conferences to discuss their work rather than "giving a talk" on it.
> But I often wish more people could just go to conferences to discuss their work rather than "giving a talk" on it.
Very true point. I've been wondering why academics "suffer" so much from a system that they themselves created and actively run (unpaid, all volunteer-based). Conferences are organized by academics for academics. Grants are submitted by and evaluated by academics. Journals' editorial boards are staffed by academics to review the work of their peers.
Even metrics of merit are defined by scholars of scientometrics, also academics, to rate the works of their peers. Yet we have a system where peer review has a high element of arbitrariness and lacks minimal professional standards, conference organizers give up too much of their valuable time to do a job (rotational, even!) that is often mediocre, and authors donate their papers for free to commercial publishers from which their institutions then buy those same papers back for a lot of money.
After a quick analysis of these entrenched systems in my first months of doctoral studies, I questioned the intelligence of people who first created such a system and then kept complaining about it while making no move to change anything.
Let's invent a new meeting format where people basically travel to a nice place with few distractions in order to discuss their research informally, no talks.
In my field (computer science), that's what workshops once were, before they became mini-conferences with three minutes of question time after each talk (for all the questions from the whole audience, not per person asking).
PS [edit]: I once saw two older professors discussing something on the corridor floor of a conference while talks were going on inside the various rooms. They were sitting on the floor, both held pens, and there were loose papers scattered around them. This was right where people coming out of talks would have had to step over them. I had skipped that session, so I asked them what they were doing. They said, "Oh, we're writing a paper. We only meet twice a year at some conference, so that's when we need to get most of our important work done." At the time I found it funny, but with the benefit of hindsight, isn't it a sad state of affairs?
> Don't confuse a polished TED talk from a practiced speaker with a seminar from any random person in academia.
I don’t expect a TED talk, but we’re still talking about MIT here. I’ve seen 8-year-olds who are more articulate. I guess where I’m from, being called in front of the class to present or talk about the homework reading is common, which is perhaps why it’s seen as exotic in the US to be able to tie words together without saying “like” after every other word, or without slumping and touching one's hair every 10 seconds.
Yeah, MIT-affiliation certainly doesn't imply good presentation skills, I agree with you there.
I wonder about this too, but I think any academic institute asks a lot of PhD students. 90% of it isn't about giving a good public talk. Especially at PhD level it's much more about actually gathering a blob of data, distilling it into a (still nonlinear) structure, and then, finally, serializing it into a paper draft. In many cases the talk is something you do at the end as a formality.
This doesn't get any simpler just because you're at an institute with a fancy name. Your hypothetical 8 year old has one chance to get a cookie and had better be pretty articulate about it. This MIT-branded academic has a million other things going for them and can afford to slack off a bit on the presentation skills.
> Your hypothetical 8 year old has one chance to get a cookie and had better be pretty articulate about it. This MIT-branded academic has a million other things going for them and can afford to slack off a bit on the presentation skills.
Nah, they also can explain how potential and kinetic energy works, talk about how many types of stars are out there and so on. Not hypothetical at all. They do like cookies, too!
> This MIT-branded academic has a million other things going for them and can afford to slack off a bit on the presentation skills.
Well, I posit that in this case their number-one worry out of a million was to sound credible, because they might be asked about their methodology. Staying consistent while making things up takes a considerable amount of effort, and the speech will suffer. Listen to the segment I pointed out and see how they act. They sort of pretended they didn't hear the question at first.
Using the word "like" is not as bad as it seems, and it's been quite common in language for longer than we think (though usage does seem to increase with each generation).
There was a recent podcast that covered it with some experts that's a great listen:
> Using the word "like" is not as bad as it seems, and it's been quite common in language for longer than we think
The word has many, many uses: filler/pause, oral punctuation, discourse marker, hedging, qualifier. It also serves an important social function, in that it can reduce perceived severity or seriousness. Young women seem to use it to assure peers that they are sweet and not threatening.
I hate it. It's not uncommon to hear it more than four or five times in a single sentence.
The implied expectations are odious: eloquence is a faux pas; directness is rude; a fifth-grade vocabulary is welcoming.
See I'm thirsty, but I can drink the water later. And my grant proposal is due at midnight tonight (12:01 on Sunday, technically), and my PhD student is texting me to say that he can't log into the cluster, and we also just got the proofs back from that paper but I guess that can wait. At some point I should fill out that reimbursement form for the conference last week but first I should get back to those undergrads who said they wanted a summer ... wait water? Oh yes sure water would be great.
Oh, he also claimed that he got IRB approval from "MIT’s Committee on the Use of Humans as Experimental Subjects under ID E-5842. JEL Codes: O31, O32, O33, J24, L65." before conducting this research, i.e., at a time when he wasn't even a PhD student.
If a paper is difficult to replicate in a high-volume field, will it ever be replicated? The question we should be asking is: how many fraudulent papers are there in the field?
I’ve even worked in places where some ML researchers seemingly made up numbers for years on end.
I agree with point 1, at least superficially. But re: point 2, there are a lot of companies with close connections to MIT (and other big institutions like Stanford) that are interested in deploying cutting edge research experiments, especially if they already have established ties with the lab/PI
A month-by-month record of scientists' time spent on different tasks is on its face absurd. The proposed methodology, automatic textual analysis of scientists' written records, giving you a year's worth of a near-constant time split pre-AI, is totally unbelievable.
The data quality for that would need to be unimaginably high.
% gunzip -c arXiv-2412.17866v1.tar.gz | tar xOf - main.tex | grep '\bI have\b'
To summarize, I have established three facts. First, AI substantially increases the average rate of materials discovery. Second, it disproportionately benefits researchers with high initial productivity. Third, this heterogeneity is driven almost entirely by differences in judgment. To understand the mechanisms behind these results, I investigate the dynamics of human-AI collaboration in science.
\item Compared to other methods I have used, the AI tool generates potential materials that are more likely to possess desirable properties.
\item The AI tool generates potential materials with physical structures that are more distinct than those produced by other methods I have used.
% gunzip -c arXiv-2412.17866v1.tar.gz | tar xOf - main.tex | grep '\b I \b' | wc
25 1858 12791
%
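A side note on the second grep above: the pattern `'\b I \b'` only matches "I" surrounded by spaces that themselves border word characters, so it silently misses sentence-initial uses ("... discovery. I investigate ..."). A simpler whole-word match counts every occurrence. Here is a minimal sketch; the filename and sample text below are made-up placeholders, not the actual arXiv source:

```shell
# Placeholder file with invented text (not from the paper).
cat > sample.tex <<'EOF'
We tested the samples. I collected the data, and I analyzed it myself.
In this section I describe the method.
EOF

# -w matches "I" only as a whole word (so "In"/"it" don't count);
# -o prints one match per line, making wc -l a per-occurrence count.
grep -o -w 'I' sample.tex | wc -l   # → 3
```

Note the whole-word form catches the sentence-initial "I" in the second line, which the space-delimited `'\b I \b'` pattern would skip.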
Maybe the point is that it is rare for a paper to have the pronoun "I" so many times. Usually the pronoun "we" is used even when there is a single author.
Agreed! It’s pretty alien. I’ve seen brilliant single author work, but nothing that uses “I” unless it’s a blog post. The formal papers are always the singular “we”. Feels very communal that way!
Nice to include the giants we stand on as implied coauthors.
Not being an academic, my (silent) reaction to singular "we" in academic writing is usually, "We? Do you have a mouse in your pocket? Or do you think you're royalty?" It's nice to hear of your more charitable interpretation.
There are, notably, two different if frequently confused “academic we” conventions, distinguished by their clusivity[1]: the inclusive “academic we” in constructions such as “thus we see that ...” refers to the author(s) and the reader (or the lecturer and the listener) collectively and is completely reasonable; the exclusive “academic we” referring only to the single author themselves, is indeed a somewhat stupid version of the “royal we” and is prohibited by some journals (though also required by others).
Yeah, it's the exclusive version that bugs me: "We tested the samples to failure on an Instron tester under quasistatic conditions." It's nice to hear some journals prohibit it.
It's rare that "I" is used because usually papers have multiple authors, and also the academic community has a weird collective delusion that you have to use "we"... but there are still a reasonable number of papers that use "I".
There's no "collective delusion" here. There is a long-established tradition that formal scientific writing should avoid use of first-person pronouns in general because it makes findings sound more subjective. It's taught this way from early on.
This is slowly starting to change, but it's still pretty much the rule.
For a while passive voice was recommended by lots of courses and some advisors, but in reality most journals never recommended it, and now many (most) actively discourage it (e.g., the Nature style guide: https://www.nature.com/nature-portfolio/for-authors/write), because it makes texts much more difficult to understand. It's quite funny how passive voice became prevalent: it was not common at the beginning of the 20th century but somehow became quite common, especially in engineering. It is only quite recently (~10 years) that the move is back to active voice.
> There is a long-established tradition that formal scientific writing should avoid use of first-person pronouns in general because it makes findings sound more subjective. It's taught this way from early on.
Established tradition doesn't negate "collective delusion".
And anyone who takes the use of "I" in a paper to imply anything about its authenticity is definitely indulging in some form of delusion. It's not the norm, but it is definitely permitted in most technical fields. When I was in academia, no reputable journal editor would take seriously reviewer feedback that complained about the use of "I".
> The paper was championed by MIT economists Daron Acemoglu, who won the 2024 economics Nobel, and David Autor. The two said they were approached in January by a computer scientist with experience in materials science who questioned how the technology worked, and how a lab that he wasn’t aware of had experienced gains in innovation. Unable to resolve those concerns, they brought it to the attention of MIT, which began conducting a review.
So the PhD student might have been kicked out. But what about the people who "championed it"? If they worked with the student, surely they might have figured out that the mythical lab full of 1,000s of materials scientists might not exist, or that it existed but never actually used any AI tool.
Apparently, none of the 21 people mentioned in the acknowledgments questioned the source of the dataset. One of them also wrote a quite popular Twitter thread about the research. When notified of the recent events, he curtly replied that "It indeed seems like the data used in the paper is unreliable." [no need to mention them by name, I think]
This happens again and again in research - I'm just reminded of the stem cell scandal around Obokata. Before the fraud was uncovered: dozens of senior researchers supporting the research, using the glory for their own gains. After the fraud was exposed: nobody wants to have been involved; it was only that one junior person.
It doesn't excuse the fraud of the junior person but it makes you think how many senior-level people out there are riding similar fraudulent waves, doing zero checks on their own junior people's work.
Most of the processes surrounding and supporting science are not robust against a dedicated adversary seeking to exploit the system. This is nothing new - Newton ordered and then wrote the anonymous report commissioned by (iirc) the Royal Society to decide who invented calculus, him or Leibniz.
Basically, science is quite vulnerable to malicious exploiters. Part of this is because society isn't funding science anywhere near sufficiently to do a priori in-depth checks. You claim you got data on hundreds of measurable thingies in a certain way (from surveying people to scanning the web to whatever)? If it's not blatantly obviously a lie, it'll probably be accepted. Which is inevitable: at one point, you're going to have to accept the data as genuine. If there's no obvious red flags, you'd only waste time on further checking data - you'd need to do a real deep dive (expensive time-wise) to come up with circumstantial evidence that may still be explainable in a benign manner. For scientists, it is almost always more profitable to spend such time investments on furthering their own scientific efforts.
So yes, there are various ways in which someone willing, dedicated and sufficiently skilled can "Nigerian-Prince" the scientific process. Thankfully, the skill to do so typically requires intimate knowledge of the scientific process and how to conduct research -- this cheating is not easily accessible to outside bullshitters (yet).
Impressively, the paper seems to have been cited 50 times already. I don't mind much whether it's taken down or not, but with the old-guard publishers you can at least get a retraction notice or a comment about the issues with a paper embedded in the publication. If you find this paper cited somewhere and follow it to the source on arXiv, you will never be made aware of the disputes surrounding the research. Preprint servers have somewhat of a weakness here.
Most of the 50 citations are from preprint servers (like arXiv) or aggregators (like ResearchGate). It would be nice to count the citations appearing in research papers in peer-reviewed journals.
> you will never be made aware of the disputes surrounding the research
ArXiv is not peer reviewed, so it's about as trustworthy as WordPress or Medium or Blogspot or X/Twitter or ... The main difference is that the post is a PDF instead of HTML. There is an invitation system to avoid the most blatant cases, but it's very weak.
I remember a weird cryptography "breakthrough", and they published it in arXiv, and the first 5 pages were an explanation of the rule of 9 for divisibility and then a dubious fast factorization algorithm. [The preprint was linked in https://news.ycombinator.com/item?id=20666501 and I replied to it (s/module/modulo/g)]
A weakness that goes hand-in-hand with the lack of peer review. There's moderation, but it shouldn't be considered equivalent to peer review. Trusting the study means trusting the author or reviewing the paper yourself. If a withdrawal happens, either the author comments on why they did it[0] or, similarly, you have to search it out yourself.
[0] E.g. arxiv/0812.0848: "This paper has been withdrawn by the author due to a crucial definition error of Triebel space".
> A weakness that goes hand-in-hand with the lack of peer review
Peer review is not well equipped to catch fraud and deliberate deception. A half-competent fraud will result in data that looks reasonable at first glance, and peer reviewers aren't in the business of trying to replicate studies and results.
Instead, peer review is better at catching papers that either have internal quality problems (e.g. proofs or arguments that don't prove what they claim to prove) or are missing links to a crucial part of the literature (e.g. claiming an already-known result as novel). Here, the value of peer review is more ambiguous. It certainly improves the quality of the paper, but it also delays its publication by a few months.
The machine learning literature gets around this by having almost everything available in preprint with peer-reviewed conferences acting as post-facto gatekeepers, but that just reintroduces the problem of non-peer-reviewed research being seen and cited.
In my opinion the paper shouldn't be taken down. Instead, a note should be added describing the concerns with the preprint and noting that it's likely fraudulent.
Edit: Since the paper has been cited, others may still need to reference it to determine whether it materially affects a paper citing it. If the paper is removed, there's just a void.
That's what happens when a paper is withdrawn [1], and MIT requested to withdraw the paper [2]. This news title saying that they requested to take down the paper is subtly incorrect.
I agree, the offense should have a public trail. But there should be safeguards to prevent further citation of a fraudulent paper, not allowing bits and pieces to outlive the offense. Citing papers should be marked with a warning until resolved by their authors.
> The paper was championed by MIT economists Daron Acemoglu, who won the 2024 economics Nobel, and David Autor. The two said they were approached in January by a computer scientist with experience in materials science who questioned how the technology worked, and how a lab that he wasn’t aware of had experienced gains in innovation. Unable to resolve those concerns, they brought it to the attention of MIT, which began conducting a review.
If that’s the distinction, it would have been helpful for the original comment to note that they were just sharing some silly trivia instead of making a point.
It's not a silly piece of trivia, it's a completely different thing than what people think of as the "Nobel Prize", which is the set of prizes established by Nobel's will, not an unrelated prize named after him to leech off the prestige associated with his name.
The reason people correctly view this as silly trivia is that it's hardly an "unrelated prize." The Nobel Foundation administers the Economics prize in the same manner as all the others, and the awards are given at the same ceremony. You are making it sound like it's entirely separate when it's not. I don't think the Nobel Foundation was trying to "leech off the prestige associated with his name."
AFAICT your take exists entirely to delegitimize economics as a science. Very childish and frustrating.
>> It's not a silly piece of trivia, it's a completely different thing than what people think of as the "Nobel Prize", which is the set of prizes established by Nobel's will, not an unrelated prize named after him to leech off the prestige associated with his name.
> AFAICT your take exists entirely to delegitimize economics as a science. Very childish and frustrating.
You know, real sciences don't need shiny medallions to make them legitimate. I'd say your comment delegitimizes economics more than the GP's.
The prize was created, and is given, by the Nobel Foundation, which was set up by Nobel's will to carry out his last wish. If you go to the official page of the Nobel Prize, the Prize in Economic Sciences is listed with the other Nobel Prizes. It's not one of the original Nobel Prizes, but claiming it's a completely different thing is not true.
The way they presented the information, it is a silly bit of trivia. If they wanted to make some sort of argument about prestige or whatever, they could have made it. Dropping hints about some niche rabbit-hole issue is not making a good-faith argument.
This is inaccurate pedantry. It is commonly referred to as the Nobel prize in economics and is administered by the same foundation; the funding for it is a gift to the foundation from the Swedish central bank instead of being sourced from Nobel's estate.
yeah, but also "Nobel accuses the awarding institution of misusing his family's name, and states that no member of the Nobel family has ever had the intention of establishing a prize in economics." It's hijacking of the brand.
That ship has already sailed… and circumnavigated the globe several times. It’s weird anyone feels obligated to bring this stuff up since everybody familiar with the prize knows the deal.
> Nobel accuses the awarding institution of misusing his family's name
From Alfred Nobel’s great grandnephew (I’m not even sure what that looks like on a family tree), to spare anyone else looking it up.
It's pedantry: he won a prize, and the great-grandnephew says they shouldn't call it a Nobel prize. It's a waste of time to discuss what the prize should be called rather than whether the award went to the best economics research/breakthrough of that year. I don't know the answer to that, but I don't really care about the nomenclature.
> of being the best economics research/breakthrough that year.
So the idea that it should be a "peace prize" or contribute to the world as a whole is entirely lost in this definition. Which is why I find the Sveriges Riksbank memorial prize so unctuous.
The grandson of Alfred Nobel's older brother complained publicly 20 years ago... about a prize that's been given now for nearly 60 years.
Yawn.
Distant relation of man who made his fortune in explosives and used it to endow a prize for prominent academics is unhappy, complains. The foundation got to make the decision and was given the name. This is "old man yells at cloud" level of discourse. This distant relation has less of a right to say how the name gets used than the foundation created by the man.
I always wonder what happens to these high-profile transgressors. I once created a Google News alert for a high-level Apple employee who went to jail for some criminal act at Apple and never saw any indication of him again. I’m guessing his career in economics is likely over (he’d previously worked at the NY Fed before starting at MIT), and I wonder what he’ll end up doing: will he be able to find some sort of white-collar work in the future, or will he be condemned to retail or food-service employment?
The MIT announcement says they asked him to retract the paper but he wouldn't, which led to them making the public statement about the paper.
They may have thought they could jump into an industry job, including the paper and all of its good press coverage on their resume. Only the author can retract an arXiv paper, not their academic institution. It wouldn't be hard to come up with a story that they decided to leave the academic world and go into industry early.
MIT coming out and calling for the paper's retraction certainly hampers that plan. They could leave it up and hope that some future employer is so enamored with their resume that nobody does a Google search about it, but eventually one of their coworkers is going to notice.
There are a gazillion small companies out there that hire white collar workers with only a rudimentary background check (are they a felon) and an interview that is more a vibe check than anything.
He probably will never be someone of significance, but he also will probably be able to have a standard middle class life.
But would you want to work for a company that just does a vibe check, or one that raises the bar with every hire?
That high-level Apple employee was probably a manager and oversaw hiring people.
I would tell myself every day, "I wouldn't hire me."
It's not self-defeating.
It's not being a victim.
I wouldn't let it stop me from trying.
It's being accurate about what kind of company you'd want to build yourself, and the internal state of a lot of hiring managers. And with a true model of the world you can make better decisions.
> will he be able to find some sort of white-collar work in the future or will he be condemned to retail or food-service employment.
Lay low for a year, work on some start-up-ish looking project, then use his middle name to get hired at one of the many AI startups? (only half joking)...
Stephen Glass, the dude who fabricated stories for New Republic back in the late 90s, has attempted at least twice to become an attorney after going to law school. Both New York and California denied his bar applications on the grounds that he failed the standards for moral character. He nonetheless seems to be employed by a law firm, but not as a practicing attorney.
Or, as in the case of the LaCour/UCLA kid, a lot of outfits will agree with the ends if not the means. Still, getting caught doing this has to close 100 doors for every 1 door it opens.
This makes me think about the credibility of single-author vs. multi-author papers in different disciplines. In computer science, a paper is seen as suspicious if there's just one author (at least nowadays). But in economics it seems much more common. Can an economist explain this to me (or perhaps a paper written by multiple economists?)
While in CS multi-author papers are the norm, in econ it is quite common to have solo-authored papers, with senior PhD students solo-authoring papers without their advisor as a coauthor. And it is rare to see an econ paper with more than 3 authors.
As a CS PhD who has worked with many economists, my understanding is that the culture sees it as diluting credit, and so might put people in the acknowledgements where computer science folks would add them as authors.
A study that cannot be replicated is a study that cannot be falsified. Authors don't mind putting their names on them because there's no accountability to be held, and it's a pure net positive (one more publication and additional citations).
While poast works, I strongly feel most people on this site are not aligned with what poast represents. Graf has to spend a lot of time (and time = money) getting accounts set up to run nitter, and it would suck if the views of poast users wound up costing the internet a twitter mirror.
I strongly recommend people not investigate this unless they think 4chan is quaint. As in, if the reason you are not using X is because of the outrage at Elon and the "typical user" of X, then maybe use xcancel instead.
oh, totally. I was hoping to avoid people going "i wonder what poast is", but Graf said they've been backlinked before and it's no big deal; but i can't remove my earlier comment.
Reading the paper (which is still up) ... the "AI" (sigh) tool described there would not have been particularly novel or unusual, even if the research was conducted several years ago. ML + inverse design for materials has been used for decades.
Actually, arXiv is moderated, and if policies are violated they may even withdraw* a paper themselves, if it wasn't declined in the first place. Regarding policies, it's mentioned that a "submission may be declined if the moderators determine it lacks originality, novelty, significance, and/or contains falsified, plagiarized content or serious misrepresentations of data, affiliation, or content."
*Note that this creates a new version lacking any download which also becomes the default but any previous ones are still available.
Suppose that arXiv withdraws it and says the reason is fraud. What if it turns out to not be fraud? Either way, what if the author sues for libel? Why should arXiv spend resources evaluating papers after they've already been published on the arXiv? It's just inviting all the issues stackoverflow and the youtube copyright strike system have.
Judging quality/fraud is the role of a journal/conference, not arXiv. If a paper gets rejected does it come off arXiv? No. If a paper is never submitted does it come off? No. If a paper is retracted, does it come off? No. ArXiv should avoid making as many subjective determinations as possible.
I agree with this, it's actually a good reminder not to trust a preprint server. Arxiv already has an inappropriate air of validity, moderation will only make it worse.
(Incidentally, I don't think misplaced trust in preprints is much of an academic issue, people that are experts in their field can easily judge quality for themselves. It's laypeople taking them at face value that's the problem.)
>Earlier this year, the COD conducted a confidential internal review based upon allegations it received regarding certain aspects of this paper. While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we are writing to inform you that MIT has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper. Based upon this finding, we also believe that the inclusion of this paper in arXiv may violate arXiv’s Code of Conduct.
It sounds like "we don't like it and won't tell you why, we're hiding behind MIT policy and vague notions of privacy".
MIT should just demonstrate in a paper what the shortcomings are and print it, adding it to the citation tree of the original.
Looking very briefly at the paper and speculating wildly, I could imagine that the company who were subject of it - or their staff - might not appreciate it and have put pressure on MIT??
Solid amount of Streisand Effect going on here -- lots of attention has been brought to the paper (and that is everything, after all!).
> It sounds like "we don't like it and won't tell you why, we're hiding behind MIT policy and vague notions of privacy".
FERPA is federal law. It is quite likely that MIT is legally bound to not release some pieces of evidence which are crucial in this case (hypothetically, for example: that the student's educational record is inconsistent with claims made in the paper).
> Looking very briefly at the paper and speculating wildly, I could imagine that the company who were subject of it - or their staff - might not appreciate it and have put pressure on MIT??
The apparent issue is that the data appears to have been entirely fabricated. The author appears to simply be a fraud.
you'd think that such a widely cited fraudulent paper might have caused problems in other research, but probably nobody who cited it actually read it anyway, so. it's turtles all the way down.
That looks like the kind of paper that causes companies to lose lots of money by hyping up what is likely a less-than-impressive method. But it does not make any theoretical claims, so it cannot contaminate research.
I've attempted to put a neutral title at the top of this page. If someone can come up with a better (i.e. more accurate and neutral) one, we can change it again.
(Since press release titles about negative news tend to studiously avoid saying anything, we tend to classify them in the "misleading" bucket of https://news.ycombinator.com/newsguidelines.html, which justifies rewriting them.)
Perhaps replace "take down" with "withdraw" (arXiv's mechanism for dealing with bad papers post-publication, which is what MIT calls for) or "retract" (the mechanism that traditional journals employ, similar to the previous, and a common term in academia). In arXiv's way of handling papers, "removal" and "withdrawal" are distinct[0], and based on some comments in this thread, the current title seems to create confusion that this is about the former.
[0]: https://info.arxiv.org/help/withdraw.html: "Articles that have been announced and made public cannot be completely removed. A withdrawal creates a new version of the paper marked as withdrawn."
If I've learned one thing using AI, it's that other people's experiences are not particularly relevant. So this paper is a meh! for me whatever its provenance.
"I don't endorse this paper. Therefore you should take it down. I won't tell you why. Trust me bro."
Whether MIT is right or wrong, the arrogance displayed is staggering. The only thing more shocking is that obviously this behavior works for them and they are used to people obeying them without question because they are MIT.
More like "Because it involves a student, FERPA won't allow us to legally disclose what's going on, but we kicked the student out so you should take the hint and realize what was going on"
The arrogance of MIT is staggering? I would say the arrogance of the paper's author is 10x as staggering, if what Robert Palgrave has suggested is true.
I think MIT is trying to protect its reputation as a would-be place of fraud-free research, unlike Harvard.
I'm sure this works for other institutions also, not just MIT. Maybe the evidence they have for the request requires disclosing data that violates FERPA, which they obviously aren't allowed to do.
They become aware of academic dishonesty where a student tried to publish a paper with faked data, and taking it down is disappointing posturing? What?
which becomes part of the record. They can put a banner up top. Expunging takes it out of the record, and folks can't trace the history of it. Also, I didn't say 'posturing', which has a particular meaning; I said it's 'a posture'.