> To accomplish that feat, the treatment is wrapped in fatty lipid molecules to protect it from degradation in the blood on its way to the liver, where the edit will be made. Inside the lipids are instructions that command the cells to produce an enzyme that edits the gene. They also carry a molecular GPS — CRISPR — which was altered to crawl along a person’s DNA until it finds the exact DNA letter that needs to be changed.

That is one of the most incredible things I have ever read.


How did these clowns manage to make my mouse cursor laggy? It is incomprehensible to me how you can live in such a big bubble, with such a big paycheck, and spend zero brainpower on systems without graphics acceleration.

This is extremely bad engineering and these engineers should be called out for it. It takes a special kind of person to deliver this and be proud of it.

Once they've made their millions at Google, these engineers will be our landlords, angel investors, you name it. The level of ignorance is unfathomable. Very sad.


On the off-chance someone at Apple reads this, I'll repeat my perennial plea that Apple stop popping up 'Give me your (local admin) password right now' dialogs randomly throughout the day because the computer has a hankering to install updates or something.

Anyone with basic skills can whip up a convincing replica of that popup on the Web, and the "bottom 80%" (at least) of users in technical savvy would not think to try dragging it out of the browser viewport or switching tabs to see if it is fake or real.

The only protection against this kind of stuff is to NOT teach users that legitimate software pops up random "enter your password" dialogs in front of your work without any prompting. That's what these dialogs are doing.

Display a colorful flashing icon in the menu bar. Use an interstitial secure screen like Windows does. Whatever. But the modern macOS 'security' UI is wildly bad.


I am the author of this piece, and I didn't share it on HN; I don't hang out here. I just gotta say: wow, tough crowd. I wrote this piece from an emotionally low point after another fruitless day of applying to jobs. I didn't have a particular agenda in mind. I was voicing what I've been through and some of what I was experiencing, with no expectations.

You'll notice in the comments section that the population of substackistan is much less FUCKING CYNICAL AND NEGATIVE than you guys, with many commenters saying they are in the same position. I heard from writers, designers, and engineers going through similar times.

My portfolio site is https://shawnfromportland.com; you can find my resume there. If you have leads that you think I might match with, you can definitely send them my way. I will even put a false last name on an updated resume for you guys.

For those who are wondering: I legally changed my name to K long ago because my dad's last name starts with K, but I didn't like identifying with his family name everywhere I went, because he was not in my life and didn't contribute to shaping me. I thought hard about what other name I could choose, but nothing resonated with me. I had already been using Shawn K for years before legally changing it, and it was the only thing that felt right.


(I work at Mozilla, but not on the VCS tooling, or this transition)

To give a bit of additional context here, since the link doesn't have any:

The Firefox code has indeed recently moved from having its canonical home on Mercurial at hg.mozilla.org to GitHub. This only affects the code; Bugzilla is still being used for issue tracking, Phabricator for code review and landing, and our Taskcluster system for CI.

In the short term the Mercurial servers still exist and are synced from GitHub. That allows automated systems to move to the git backend over time rather than all at once. Mercurial is also still being used for the "try" repository (where you push to run CI on WIP patches), although it's increasingly behind an abstraction layer; that will also migrate later.

For people familiar with the old repos, "mozilla-central" is mapped onto the more standard branch name "main", and "autoland" is a branch called "autoland".

It's also true that it's been possible to contribute to Firefox exclusively using git for a long time, although you had to install the "git cinnabar" extension. The choice between learning hg and using git+extension was a bit of an impediment for many new contributors, who most often knew git and not Mercurial. Now that choice is no longer necessary. Glandium, who wrote git cinnabar, wrote extensively about the history of VCS at Mozilla when this migration was first announced, and gave a little more context on the reasons for the migration [1].

So in the short term the differences from the point of view of contributors are minimal: using stock git is now the default and expected workflow, but apart from that not much else has changed. There may or may not eventually be support for GitHub-based workflows (i.e. PRs) but that is explicitly not part of this change.

On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.

[1] https://glandium.org/blog/?p=4346


There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.

It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.

This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].

It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.

[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...


As someone who works professionally on embedded software devices that update over the internet: car companies are stuck not because they can't get software talent, but because they have no ability to actually build the electronics alongside the software, which is ultimately what constrains embedded software. Without the right hardware, the constraints are just insurmountable. You cannot do feature X because board A doesn't have the API to your MCU, or because it runs some dogshit-speed communication system that means you have 500ms of lag. The feature is just unworkable, and if the PMs push it anyway, you get what happens to the legacy carmakers: terrible underpowered infotainment systems with no central design philosophy, stuck in an awkward, bad middle between a full software stack and all buttons for everything. Their model of integrating 3rd-party vendor computers just doesn't work for this kind of thing. Tesla, Rivian, and the Chinese EV makers all manufacture their own electronics, which is what lets them achieve that outcome. But you cannot just roll all your own electronics in a year.

The expressed goal is emotionally impacting UX. They clearly got strong emotions out of you. Mission accomplished!

It's nice to see a paper that confirms what anyone who has practiced using LLM tools already knows very well, heuristically. Keeping your context clean matters, "conversations" are only a construct of product interfaces, they hurt the quality of responses from the LLM itself, and once your context is "poisoned" it will not recover, you need to start fresh with a new chat.
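To make concrete the point that "conversations" are purely a client-side construct, here is a minimal Python sketch; the complete() function is a stand-in for whatever chat-completion API you use, and the role/content message format is just the common convention:

    from typing import Dict, List

    Message = Dict[str, str]

    def complete(messages: List[Message]) -> str:
        """Stand-in for a real chat-completion API call (hypothetical)."""
        return f"(model reply given {len(messages)} messages of context)"

    # A "conversation" is nothing but a client-side list that gets resent
    # in full on every call; the model itself is stateless.
    history: List[Message] = [{"role": "system", "content": "Be concise."}]

    def ask(question: str) -> str:
        history.append({"role": "user", "content": question})
        reply = complete(history)
        history.append({"role": "assistant", "content": reply})
        return reply

    ask("First question")
    ask("A follow-up that inherits everything above, good or bad")

    # There is no server-side state to repair once the context is poisoned:
    # "starting fresh" is literally just starting a new list.
    history = [{"role": "system", "content": "Be concise."}]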

One other fun part of gene editing in vivo is that we don't actually use plain GACU (U being RNA's counterpart to the T in DNA). It turns out that if you use pseudouridine (Ψ) instead of uridine (U), the body's immune system doesn't raise nearly as much alarm, as it doesn't really see that mRNA as quite so dangerous. But the RNA -> protein equipment will still make proteins from it without any problems.

Which, yeah, that's a miraculous discovery. And it was well worth the 2023 Nobel in Medicine.

Like, the whole system for gene editing in vivo that we've developed is just crazy little discovery after crazy little discovery. It's all sooooo freakin' cool.

https://en.wikipedia.org/wiki/Pseudouridine


I've transcended the vanilla/framework arguments in favor of "do we even need a website for this?".

I've discovered that when you start getting really cynical about the actual need for a web application - especially in B2B SaaS - you may be surprised at how far you can take the business without touching a browser.

A vast majority of the hours I've spent building websites & applications has been devoted to administrative-style UI/UX, wherein we are ultimately giving the admin a way to mutate fields in a database somewhere such that the application behaves to the customer's expectations. In many situations, it is clearly 100x faster/easier/less bullshit to send the business a template of the configuration (Excel files) and then load+merge their results directly into the same SQL tables.
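As a minimal sketch of that load+merge step (pandas + SQLAlchemy; the connection string, sheet, table, and column names here are all hypothetical):

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical setup: the customer fills out a spreadsheet template and
    # we load it straight into the same tables an admin UI would mutate.
    engine = create_engine("postgresql://app:secret@localhost/app")

    df = pd.read_excel("customer_config.xlsx", sheet_name="Pricing")

    # Basic validation before touching the database.
    required = {"sku", "price", "currency"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"Template is missing columns: {missing}")

    # Stage the upload, then merge into the live table in plain SQL.
    df.to_sql("pricing_staging", engine, if_exists="replace", index=False)
    with engine.begin() as conn:
        conn.exec_driver_sql("""
            INSERT INTO pricing (sku, price, currency)
            SELECT sku, price, currency FROM pricing_staging
            ON CONFLICT (sku) DO UPDATE
            SET price = EXCLUDED.price, currency = EXCLUDED.currency
        """)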

The web provides one type of UI/UX. It isn't the only way for users to interact with your product or business. Email and flat files are far more flexible than any web solution.


This is not "malicious compliance", this is more like "pedantic enforcement".

"Malicious compliance" would be if the same team booked a 50min meeting then a 10min meeting in the same room.


Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.

Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.

Pretty cool that Linus Torvalds invented a completely distributed version control system and 20 years later we all use it to store our code in a single place.

We feel Nextcloud's pain. Our team at Everfind (unified search across Drive, OneDrive, Dropbox, etc.) has spent the past year fighting for the *drive.readonly* scope simply so we can download files, run OCR, and index their full text for users. Google keeps telling us to make do with *drive.file* + *drive.metadata.readonly*, which breaks continuous discovery and cripples search results for any new or updated document.

Bottom line: Google's "least-privilege" rhetoric sounds noble, but in practice it gives Big Tech's first-party apps privileged access while forcing independent vendors to ship half-working products - or get kicked out of the Play Store. The result is that users lose features and choice, and small devs burn countless hours arguing with a copy-paste policy bot.
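For concreteness, here is roughly what that scope difference looks like with the Python Google API client (a sketch; credentials.json is your OAuth client file):

    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    # What Google steers third parties toward: per-file access only.
    # SCOPES = ["https://www.googleapis.com/auth/drive.file"]

    # The restricted scope needed to crawl and index a user's existing files:
    SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)
    drive = build("drive", "v3", credentials=creds)

    # Under drive.file, this listing would only return files the app itself
    # created or the user explicitly opened with it -- useless for indexing
    # documents that already exist or change outside the app.
    results = drive.files().list(pageSize=10, fields="files(id, name)").execute()
    for f in results.get("files", []):
        print(f["id"], f["name"])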


Strongly recommend this blog post too, which is a much more detailed and persuasive version of the same point. The author actually goes and builds a coding agent from zero: https://ampcode.com/how-to-build-an-agent

It is indeed astonishing how well a loop with an LLM that can call tools works for all kinds of tasks now. Yes, sometimes they go off the rails, there is the problem of getting that last 10% of reliability, etc. etc., but if you're not at least a little bit amazed then I urge you to go and hack together something like this yourself, which will take you about 30 minutes. It's possible to have a sense of wonder about these things without giving up your healthy skepticism of whether AI is actually going to be effective for this or that use case.

This "unreasonable effectiveness" of putting the LLM in a loop also accounts for the enormous proliferation of coding agents out there now: Claude Code, Windsurf, Cursor, Cline, Copilot, Aider, Codex... and a ton of also-rans; as one HN poster put it the other day, it seems like everyone and their mother is writing one. The reason is that there is no secret sauce and 95% of the magic is in the LLM itself and how it's been fine-tuned to do tool calls. One of the lead developers of Claude Code candidly admits this in a recent interview.[0] Of course, a ton of work goes into making these tools work well, but ultimately they all have the same simple core.

[0] https://www.youtube.com/watch?v=zDmW5hJPsvQ
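To see how little scaffolding that loop actually needs, here is a toy Python sketch; the llm() function and its message/tool format are stand-ins, not any particular vendor's API:

    import json
    import subprocess
    from typing import Dict, List

    def run_shell(command: str) -> str:
        """The single 'tool' this toy agent exposes."""
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        return result.stdout + result.stderr

    def llm(messages: List[Dict]) -> Dict:
        """Stand-in for a tool-calling model; wire up your model of choice here.
        This stub requests one tool call, then answers, so the loop is exercised."""
        if not any(m["role"] == "tool" for m in messages):
            return {"type": "tool_call", "command": "ls"}
        return {"type": "text", "content": "Done; see the tool output above."}

    def agent(task: str) -> str:
        messages: List[Dict] = [{"role": "user", "content": task}]
        while True:
            reply = llm(messages)
            if reply["type"] == "text":  # the model decided it's finished
                return reply["content"]
            # The model asked for a tool: run it, feed the result back, loop.
            output = run_shell(reply["command"])
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "tool", "content": output})

    print(agent("List the files in the current directory."))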


It's like reading "A Discipline of Programming", by Dijkstra. That morality play approach was needed back then, because nobody knew how to think about this stuff.

Most explanations of ownership in Rust are far too wordy. See [1]. The core concepts are mostly there, but hidden under all the examples.

    - Each data object in Rust has exactly one owner.
      - Ownership can be transferred in ways that preserve the one-owner rule.
      - If you need multiple ownership, the real owner has to be a reference-counted cell.
        Those cells can be cloned (duplicated).
      - If the owner goes away, so do the things it owns.

    - You can borrow access to a data object using a reference. 
      - There's a big distinction between owning and referencing.
      - References can be passed around and stored, but cannot outlive the object.
        (That would be a "dangling pointer" error.)
      - This is strictly enforced at compile time by the borrow checker.
That explains the model. Once that's understood, all the details can be tied back to those rules.

[1] https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...


It's absurd that symmetrical registration/cancellation is being slow-walked like this (but under this admin, certainly not surprising).

Also, they still expect you to authenticate when they phone you. No, I'm not going to tell you my birthday when you phone me. No wonder so many people get scammed, when banks are training people on how to get scammed.

Research funded by the NIH, which our government is actively gutting.

Airbnb made the same mistake Google did: They screwed up their core service. I used to be a steady ABB customer but now hotels are almost always cheaper, offer better service, and are more predictable.

Not to mention that hotel websites are typically easier to navigate and contain a lot less React-sludge that makes every click take forever to respond.


Oh my god, this is ugly as fuck.

It reminds me of a study about the perception of beauty among art students.

Before they start their studies, their perception of beauty is similar to everyone else's.

But as they go through their course, their perception starts to shift. What they see as "beautiful" doesn't match the perception of others.

They learn what "skeuomorphism" is, and suddenly everything must be flat and undifferentiated.


If your arrays have more than two dimensions, please consider using Xarray [1], which adds dimension naming to NumPy arrays. Broadcasting and alignment then becomes automatic without needing to transpose, add dummy axes, or anything like that. I believe that alone solves most of the complaints in the article.
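For example (a minimal sketch; the dimension names are arbitrary), alignment and broadcasting happen by dimension name rather than axis position:

    import numpy as np
    import xarray as xr

    temps = xr.DataArray(np.random.rand(3, 4), dims=("time", "station"))
    weights = xr.DataArray([0.2, 0.3, 0.5], dims="time")

    # Broadcast by dimension *name*, not position: no transpose, no np.newaxis.
    # The result has dims ("time", "station") and shape (3, 4).
    weighted = temps * weights

    # Reductions are by name too, so there is no axis bookkeeping to get wrong.
    mean_per_station = weighted.mean(dim="time")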

Compared to NumPy, Xarray is a little thin in certain areas like linear algebra, but since it's very easy to drop back to NumPy from Xarray, what I've done in the past is add little helper functions for any specific NumPy stuff I need that isn't already included, so I only need to understand the NumPy version of the API well enough one time to write that helper function and its tests. (To be clear, though, the majority of NumPy ufuncs are supported out of the box.)

I'll finish by saying, to contrast with the author, I don't dislike NumPy, but I do find its API and data model to be insufficient for truly multidimensional data. For me three dimensions is the threshold where using Xarray pays off.

[1] https://xarray.dev


The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.

What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.

There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.


It’s probably time to channel Larry Ellison and shake these guys down. Or at least shake their pockets for loose change.

They are stealing from you. As you point out, you go out of your way to help companies with your OSS options; you’re way on the right side of principled and generous. This is abuse. Don’t put up with it.

Given the history, I’d suggest a short C&D recounting the 10 years(!) of theft and the measures they’ve gone to, telling them they have 15 days to either stop or get licensed, or you will seek 10 years of back licensing, interest, and penalties. I assure you that you will receive a call from someone, especially if you have to turn the software off on day 16.

Anyway, this seems substantial to me, but there’s also an ethical and philosophical question of responsibilities here. Do you have more responsibility to your employees and shareholders, or to this space company? Even if you’re crazy rich as a company, I propose that as the CEO you owe a pretty strong duty to those stakeholders to try and recover stolen assets. You don’t have to be mad at random spaceco, but I propose you think hard before walking away.

Quick edit, just to frame your head on this: if the company is in the US, then this behavior likely falls under DMCA anti-circumvention laws, and if it does, people would have criminal liability. Now, I believe the DMCA is terrible legislation; it lets corporations create criminal liability through license agreements. But it is the law of the land here, and I would guess that as soon as your attorney lays this out and their attorneys get an eye on it, you will find willing negotiation happening.


Material Design v1 cracked it. It was simple to implement, simple to understand and simple to use. Minimal overheads with a clear content-first approach.

"It's time to move beyond “clean” and “boring” designs to create interfaces that connect with people on an emotional level."

I don't want websites and apps to connect with me on an emotional level. I want to turn my phone/computer on, use the app/program to achieve what I'm trying to do, and turn it off again, so I can get back to the real world.


> We’re not going to waste days chasing them. But at some point, this goes beyond saving a few bucks: it becomes performance art.

Oh, for the love of tech, do chase them. This absolutely has to be in violation of the terms of your trial; take them to court. If not, then at the very least name and shame the company, so some dumb manager orchestrating this silly theft will get fired and someone more mature can be rotated in.


We have normalized the treatment of the financial and payments systems as things that exist primarily to perform law enforcement surveillance functions. It's the same dynamic that leads to debanking of small accounts - payments firms exist on thin margins and the potential fines for inadvertently servicing a bad actor are stratospheric, so it's entirely logical to play it safe by refusing to service anyone whose profile looks even the slightest bit risky.

  “A student asked, ‘Yeah, but do the wrinkles always form in the same way?’ And I thought: I haven’t the foggiest clue!” said German, a faculty member at the Thomas J. Watson College of Engineering and Applied Science’s Department of Biomedical Engineering. “So it led to this research to find out.”
I wish the authors had mentioned the kid by name in the acknowledgements section of the paper. I bet the kid would have felt very proud and inspired to have their name published in a scientific journal.
