Java performance isn't the fastest, that's ok, a close 3rd place behind C/CPP ain't bad. And you're still ahead of Go, and 10x or more ahead of Python and Ruby.
Java syntax isn't perfect, but it is consistent and predictable. And hey, if you're using IntelliJ IDEA or Eclipse (and not Notepad, Atom, etc.), it's just pressing control-space all day and you're fine.
Java memory management seems weird from a Unix Philosophy POV, till you understand what's happening. Again, not perfect, but a good tradeoff.
What do you get for all of these tradeoffs? Speed and memory safety. And with that you still have dynamic invocation capabilities (making things like interception possible) and hotswap/live redefinition (things that C/C++ cannot do).
Perfect? No, but very practical for the real world use case.
> And hey, if you're using IntelliJ IDEA or Eclipse (and not notepad, atom, etc),
Java's tools are really top notch. Using IntelliJ for Java feels a whole new different world from using IDEs for other languages.
Speaking of Go, does anyone know why the Go community isn't hot on developing containers for concurrent data structures? I see Mutex this and lock that scattered through Go code, while in the Java community the #1 advice for writing concurrent code is to use Java's amazing containers. Sometimes I do miss java.util.concurrent and JCTools.
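For anyone who hasn't used them, this is roughly what that advice looks like in practice: a minimal sketch (class and method names are mine) where a shared map takes concurrent updates with no explicit locking anywhere in user code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CounterDemo {
    // A shared word-count map: no explicit locks in user code.
    static final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public static void count(String word) {
        // merge() is atomic per key, so concurrent callers never lose updates.
        counts.merge(word, 1, Integer::sum);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 10_000; n++) count("hits");
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(counts.get("hits")); // 40000
    }
}
```

The locking (striped, per-bin) still exists, of course; it's just inside the container instead of scattered through every call site.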
I'll offer a counterpoint to the responses. Until Go got generics, concurrent data structures were awkward. The stdlib now does include things like sync.Map.
In fact my experience has been that overuse of channels is a code smell that a lot of new Go developers fall into and later regret. There's a reason the log package uses a mutex for synchronization.
In general I think channels are great for connecting a few large chunks of your program together. Concurrency is great but also not every function call benefits from being turned into a distributed system.
I think that it would be a great idea to develop more concurrent go data structures with generics and I suspect inertia is what's keeping the community from doing it.
My credentials, such as they are: I've been writing Go since 1.0, worked at Google, taught Go classes, and owned some of the original Go services (the downloads server, aka the payload server).
Yeah, even with channels and goroutines, I'd imagine we could encapsulate them as primitives in containers, as the concurrency literature often advocates. Case in point: it was fun to go through Pike's talk Go Concurrency Patterns, yet I'm not sure all the patterns he discussed would be that simple to implement compared to just declaratively using well-packed containers.
Don't communicate by sharing memory; share memory by communicating.
The overuse of Mutex and Lock comes from developers bringing along patterns from other languages where they are used to communicating via shared memory. So this aspect of the language just doesn't click as well for many people at first. How long it takes you to get it depends on your experience.
My experience is a shocking amount of the golang community believe channels are a performance problem for whatever they're doing, and they use mutexes in some misguided effort at optimization.
But then I have also encountered Rust people that will look down on Java but had no idea buffered I/O had higher throughput than unbuffered.
Depending on the situation, channels can absolutely be higher overhead and not worthwhile. Google internally recommends not using them in many situations.
Unbuffered IO is a tradeoff. For certain use cases it does help, because throughput isn't everything. I'm sure Buffered is better in the average use case, but that doesn't mean you would never need unbuffered.
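To make the buffered-vs-unbuffered gap concrete, here's a rough Java sketch (file name and size are arbitrary). The unbuffered path pays roughly one read call per byte, which is where the throughput difference comes from; the buffered path amortizes that over an internal block read.

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedDemo {
    // Reads a file one byte at a time; 'buffered' controls whether a
    // BufferedInputStream sits between us and the raw stream.
    static long sumBytes(Path p, boolean buffered) throws IOException {
        InputStream raw = Files.newInputStream(p);
        InputStream in = buffered ? new BufferedInputStream(raw) : raw;
        long sum = 0;
        try (in) {
            int b;
            while ((b = in.read()) != -1) sum += b; // unbuffered: ~1 syscall per byte
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[1 << 20]); // 1 MiB of zeros
        long t0 = System.nanoTime();
        sumBytes(tmp, false);
        long unbuffered = System.nanoTime() - t0;
        t0 = System.nanoTime();
        sumBytes(tmp, true);
        long buffered = System.nanoTime() - t0;
        System.out.printf("unbuffered %d ms, buffered %d ms%n",
                unbuffered / 1_000_000, buffered / 1_000_000);
        Files.delete(tmp);
    }
}
```

And as the parent says, this is a throughput tradeoff, not a universal rule: if you need each byte to hit the device immediately (logs before a crash, say), buffering works against you.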
> Depending on the situation, channels can absolutely be higher overhead and not worthwhile.
Like streaming arrays one byte at a time through the channel.
Such devs just aren't very good, and hear "Google internally recommends not using them in many situations" but jump to inferring that means all of their situations qualify.
> but that doesn't mean you would never need unbuffered.
Channels make the most sense when you need to decouple code. The performance differences are rarely a big deal. Most of the time it's unnecessary cognitive overhead. People treat both sides as dogma, but the truth is that you need to think critically when choosing between the two.
Thanks! What about a data structure shared by multiple goroutines? Say, an in-memory cache object? Or do we always have multiple goroutines talk to a dedicated goroutine for the shared value? Would the performance be okay for all use cases? A reason people use JCTools is that it can easily support hundreds of millions of concurrent reads/writes to its data structures on a 10-year-old laptop.
For things like cache I generally have 2 goroutines communicating with it. One that directs reads and one that directs writes. Using CSP style you can pass the data (by value) through to the cache (or any other CSP style process) without copying and it performs quite well. I've written several high performance systems in this way with great results.
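A rough Java analogue of that owner-process style, sketched with a plain thread and a queue (the class and method names are mine, not from any library): one thread owns the map, and everyone else sends it messages, so the map itself needs no synchronization.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class OwnedCache {
    // All map access happens on one owner thread; callers talk to it
    // through a queue, roughly the CSP style described above.
    private final BlockingQueue<Runnable> inbox = new LinkedBlockingQueue<>();
    private final Map<String, String> data = new HashMap<>(); // confined, unsynchronized

    public OwnedCache() {
        Thread owner = new Thread(() -> {
            try {
                while (true) inbox.take().run(); // messages mutate 'data' here only
            } catch (InterruptedException e) { /* shut down */ }
        });
        owner.setDaemon(true);
        owner.start();
    }

    public void put(String k, String v) throws InterruptedException {
        inbox.put(() -> data.put(k, v));
    }

    public String get(String k) throws Exception {
        CompletableFuture<String> reply = new CompletableFuture<>();
        inbox.put(() -> reply.complete(data.get(k)));
        return reply.get(); // blocks until the owner thread answers
    }
}
```

Because puts and gets go through the same FIFO inbox, a get enqueued after a put always observes it; the split read-director/write-director design above is a refinement of the same idea.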
The only replacement for locks/mutexes is a lock free data structure. Locks are not what make concurrency possible, they are what makes it data-safe.
You can use platform threads, user-space threads, language-provided "green" threads, goroutines, continuations or whatever you wish for concurrency management, but that's almost orthogonal to data safety.
I think they’re using it in the context of what you’ll get without some degree of locking, which is data corruption and/or other issues.
It’s not that you need locking to use threads. You need locking to stop threads from ruining any shared resource/data they are both trying to touch at the same time.
It's probably possible to do if you think about it carefully but generally enqueuing a message is going to take a lock, especially if you can send an arbitrary number of messages (which may require the queue to be reallocated).
One very common queue implementation you can use to implement actors is the crossbeam-deque. It's work-stealing in nature, works in multi-threaded environments and has no locks. The implementation is quite simple to follow.
When I got out of college and was still firmly in the "Java is the solution to everything" mentality I didn't realize that my admiration was really for the JVM and the Java App Server tooling that was so much more advanced than anything else at the time. It was basically Docker + K8s for anything running on the JVM more than 2 decades earlier.
Java the language eventually drove me away because the productivity was so poor until it started improving around 2006-2007.
Now I keep an eye on it for other languages that run on the JVM: JRuby, Clojure, Scala, Groovy, Kotlin, etc.
IMO JRuby is the most interesting since you gain access to 2 very mature ecosystems by using it. When Java introduced Project Loom and made it possible to use Ruby's Fibers on the JVM via Virtual Threads it was a win for both.
Charles Nutter really doesn't get anywhere close to enough credit for his work there.
You can take pretty much any code written for Java 1.0 and still build and run it on Java 24. There are exceptions (sun.misc.Unsafe usage, for example) but they are few and far between. More so than for nearly any other language, backwards compatibility has been key to Java. Heck, there's a pretty good chance you can take a jar compiled for 1.0 and still use it to this day without recompiling it.
Both Ruby and Python, with pedigrees nearly as old as Java's, have made changes to their languages which make things look better, but ultimately break things. Heck, C++ tends to have so many undefined quirks and common compiler extensions that it's not uncommon to see code that only compiles with specific C++ compilers.
> Moreso than nearly any other language backwards compatibility has been key to java.
The divide between Java 8 and later versions very much works against this. It was a needed change (much like Python 2 vs 3) but nowhere near pleasant, especially if some of your dependencies used the old Java EE packages that were removed in, say, OpenJDK 11.
Furthermore, whenever you get the likes of Lombok or something that has any dynamic compilation (I recall Jaspersoft having issues after version upgrade, even 7 to 8 I think), or sometimes issues with DB drivers (Oracle in particular across JDK versions) or with connection pooling solutions (c3p0 in particular), there's less smooth sailing.
Not to say that the state of the ecosystem damns the language itself and overall the language itself is pretty okay when it comes to backwards compatibility, though it's certainly not ideal for most non-trivial software.
I have C++ code from 1997 that I occasionally compile. So far it runs. 10 years ago compiling with -Wall exposed an inconsequential bug and that was it. I suspect when it stops compiling it will be from the absence of a particular X11 library that I used to parse a config in an otherwise command-line utility.
Which also points to another area where Java compatibility shines. One can have a GUI application from the nineties and it still runs. It can be very ugly, especially on a high-DPI screen, but one can still use it.
Yeah, that and the portability are really incredible and underrated. It is funny, because I constantly hear things like "write once, debug everywhere", but I have yet to see an alternative that has a higher probability of working everywhere.
Although Python is pretty close, if you exclude Windows (and don't we all want to do that?).
I can run basically any Perl code back to Perl 4 (March 1991) on Perl 5.40.2 which is current. I can run the same code on DOS, BeOS, Amiga, Atari ST, any of the BSDs, Linux distros, macOS, OS X, Windows, HP/UX, SunOS, Solaris, IRIX, OSF/1, Tru64, z/OS, Android, classic Mac, and more.
This takes nothing away from Java and the Java ecosystem though. The JVM allows around the same number of target systems to run not one language but dozens. There’s JRuby, Jython, Clojure, Scala, Kotlin, jgo, multiple COBOL compilers that target JVM, Armed Bear Common Lisp, Eta, Sulong, Oxygene (Object Pascal IIRC), Rakudo (the main compiler for Perl’s sister language Raku) can target JVM, JPHP, Renjin (R), multiple implementations of Scheme, Yeti, Open Source Simula, Redline (Smalltalk), Ballerina, Fantom, Haxe (which targets multiple VM backends), Ceylon, and more.
Perl has a way to inline other languages, but is only really targeted by Perl and by a really ancient version of PHP. The JVM is a bona fide target for so many. Even LLVM intermediate code has a tool to target the JVM, so basically any language with an LLVM frontend. I wouldn’t be surprised if there’s a PCode to JVM tool somewhere.
JavaScript has a few languages targeting it. WebAssembly has a bunch and growing, including C, Rust, and Go. That’s probably the closest thing to the JVM.
> I can run basically any Perl code back to Perl 4 (March 1991) on Perl 5.40.2 which is current.
Yes, but can you _read_ it?
I'm only half joking. Perl has so many ways to do things, many of them obscure but preferable for specific cases. It's often a write-only language if you can't get ahold of the dev who wrote whatever script you're trying to debug.
I wonder if modern LLMs could actually help with that.
> I can run THE SAME CODE on DOS, BeOS, Amiga, Atari ST, any of the BSDs, Linux distros, macOS, OS X, Windows, HP/UX, SunOS, Solaris, IRIX, OSF/1, Tru64, z/OS, Android, classic Mac, and more.
No, you really can't. Not anything significant anyway. There are too many deviations between some of those systems to allow you to run the same code.
I wonder what other languages run on the JVM. What about Perl, Icon, SNOBOL, Prolog, Forth, Rexx, Nim, MUMPS, Haskell, OCaml, Ada, Rust, BASIC, Rebol, Haxe, Red, etc.?
Partly facetious question, because I think there are some limitations in some cases that prevent it (not sure, but a language being too closely tied to Unix or hardware could be why), but also serious. Since the JVM platform has all that power and performance, some of these languages could benefit from that, I'm guessing.
I often run into problems running Python code under Linux.
I don't know if it is a me problem or if I'm missing the right incantations to set up the environment or whatever. Never had that many problems with Java.
But I’m a Java and Ruby person so it might really be missing knowledge.
It's not you. Python packaging has regressed into a worse mess than it was 20 years ago. I limit myself to simple scripts that only rely on builtins. Anything more complicated goes to a more dependable language.
I rarely run into issues when using Poetry. If you use pip, add packages to requirements.txt willy-nilly and don't pin versions then you are asking for trouble.
uv. Using it as the exec target for Python (uv script) is great. Dependencies declared at the top; now I have executable files in something better than bash.
I no longer shy away from writing <500 LOC utility/glue scripts in python thanks to uv.
I don't know about the difference between 20 years ago versus now, but it certainly doesn't seem to be clear now.
E.g. Poetry, venv, and pyenv have been mentioned in just the next few comments below yours, and this is just one example. I have seen other such confusion and differing statements from different people about which package-management approach to use for Python.
For anything more than just a one off script, look into venv. I’ve not written any python until this past year and can’t imagine maintaining an ongoing project without it.
Prior to that I would frequently have issues (and still have issues with one-off random scripts that use system python).
As late as 2022, I was at a company still in the middle of "migrating" from 2 to 3. I wouldn't be surprised if the migration project was still going on. The system had gone beyond tech debt and was bordering on bankruptcy.
Python 3 came out in 2008. If the 2 vs 3 differences are still biting you you probably have bigger problems to solve (deprecated, insecure, unmaintained dependencies for example).
Honestly, it isn't just you. I had to hold off on 3.13 for quite a while too, because of various package conflicts. It isn't terrible, especially thanks to things like pyenv, but it is far from perfect.
I just spent 30 minutes trying to get a Python package running on my Mac... Not feeling that. Python's version compatibility is just awful, and the mix of native code is deeply problematic. Don't get me started on tooling and observability.
I know that what you said is supposed to be true. However in my real world experience it is anything but. Cisco java programs are a disaster and require certain JVMs to run.
The enterprise Java applications we use require specific versions of specific Linux distros. It's possible that they would run on other distros, or even other operating systems, if you got the right JVM. But there's enough question about it that the companies that sell them for a substantial price aren't willing to promise even a little portability.
It always made me wonder why I hear about companies who are running very old versions of Java though. It always seemed like backwards compatibility would make keeping up to date with the latest an almost automatic thing.
It is "mostly" backwards compatible. Applets and everything related to them were dropped. A few interface dependencies were changed to improve the modularity of the runtime. Widely used hacks like sun.misc.Unsafe are getting replaced with official APIs and locked down. Development of some Java EE packages has been taken over by a third party, so they are no longer packaged within the javax namespace. To name just a few of the bigger examples.
That's not the virtues of Java the language. That's the virtues of Java the backward-compatible platform. That is, you didn't say anything about the language (syntax and semantics), you only talked about backward compatibility.
(It's still a valid point. It's just not the point you labeled it as.)
I have the exact opposite experience. I haven't coded much Java, but when I tried to revisit it, code I wrote 10 or 20 years ago doesn't even remotely compile anymore.
While with C++, 20 years later you may need to add a missing #include (that you were always supposed to have), but then it just works as it always has.
Java is, in my opinion, a complete mess. And I think it's weird how anybody could like it past the 1990s.
C++ not being compilable later hasn't been true since pre-standard C++. We're talking the 1980s now.
Entity Beans were terrible, representing the height of JEE over complexity. I remember editing at least 3 classes, a couple interfaces, and some horrific XML deployment descriptors to represent an "entity." A lot of the tooling was proprietary to the specific app server. On top of that, it was slow.
In the early 2000's, I used to work on JEE stuff for my day job, then go home and build PHP-based web apps. PHP was at least 10x more productive.
The worst thing about EntityBeans is they were so bad they made Hibernate look good, which led people to think it was good. After 10 years of hammering against ORM complexity I finally switched to using thin database wrapper layers and have not once ever regretted it.
Hibernate... a real PITA every time the application needed something beyond basic single-table CRUD queries; sadly for me that happened 99% of the time.
After some months of torture, plain JDBC with their stupid checked exceptions was refreshing, even without wrappers.
You have to keep in mind that entity beans were developed in a time before generics, annotations, and widespread use of byte code enhancement that made a lot of the easy, magical stuff we take for granted possible.
I remember. During the same time period, I wrote some Java apps that used plain old JDBC, plus some of my own helper functions for data mapping. They were lighter weight and higher performance compared to the "enterprise" Java solutions. Unfortunately they weren't buzzword compliant though.
Full JEE, not just servlets, performance, reloading, and a bunch of enterprise features. Resin existed.
After about 10+ years Spring kind of took over JEE.
Omg. Spring was just like moving code to XML. It's different now, but still.
What I miss from JEE:
- single file ear/war deployment, today that’s a docker image
- the whole resource API from Java (filesystem/jar, doesn't matter). It means you don't necessarily have to unpack the jar
- configuration / contexts for settings, + UI for it, database connections, etc. Docker kind of works, but most images fail to achieve this. Docker Compose kind of takes care of things.
> I personally don’t care for Java, but I have bills so it’s always in my back pocket. Things happen , sometimes you need to write Java to eat.
I write Java to pay the bills and my eyes and fingers thank me every day for sparing them from a sea of if err != nil. I won't even go(!) into the iota stupidity compared to Java's enums.
> Java syntax isn't perfect, but it is consistent, and predictable
This is something I greatly value in the recent changes to Java. They found a great way to include sealed classes, switch expressions, Project Loom, and records in a way that feels at home in the existing Java syntax.
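A small sketch of how those features compose (requires a recent JDK, 21+ for pattern-matching switch; the types here are made up for illustration):

```java
public class ShapeDemo {
    // A sealed hierarchy: the compiler knows these are the only subtypes.
    sealed interface Shape permits Circle, Rect {}
    record Circle(double r) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    // Pattern-matching switch expression: exhaustive over the sealed
    // hierarchy, so no default branch is needed.
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.r() * c.r();
            case Rect r -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // 12.0
    }
}
```

The nice part is exactly what the parent says: none of this reads like a foreign language bolted on; it's still recognizably Java.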
The tooling with heap dump analyzers, thread dump analyzers, GC analyzers is also top notch.
I think Gavin Bierman is an unsung hero as far as steering the Java language is concerned. I had the privilege to sit next to him in the office when I was working on TruffleRuby, and our conversations on language design were always elucidating, and his ability to gently steer others away from pitfalls was something I wish I could do as well.
Hearing the work he and others did to gradually introduce pattern matching without painting themselves into a corner was fascinating and inspiring.
> Java performance isn't the fastest, that's ok, a close 3rd place behind C/CPP ain't bad.
When Java got popular, around 1999-2001, it was not a close third behind C (or C++).
At that time, on those machines, the gap between programs written in C and programs written in Java was about the same as the gap right now between programs written in Java and programs written in pure Python.
And yet many of us embraced Java; it became the chosen language for teaching distributed systems in many Portuguese universities around 1998, because of the pain of writing portable C or C++ code across UNIX clones.
A mix of K&R C, C89, C++ARM compilers catching up with WG21 work, POSIX flavours, and lovely autoconf scripts.
Also as much as I complain about the lack of JIT tooling in CPython, many people forget that usually most people don't have any issues in polyglot codebases, it isn't a zero-sum game, and various forms of FFI and language bindings exist.
That is one of the things I have been doing across JVM/ART, CLR, and V8 for decades now, more so if we include Perl and Tcl in the mix. I seldom write full-blown 100% C or C++ code; when I reach for them, it's to write native libraries or implement language bindings.
Third best consistently used over 3 decades adds up to a great, great deal. Although, to be fair, a great deal has been invested in cutting edge GCs for the JVM – some fresh out of research.
I really like the JetBrains IDEs, both for Java and .NET, it feels way more pleasant to write code and refactor it than just in something like Visual Studio Code (which still feels better for utility scripts). They have pretty good run profiles that you can put in your repo, pretty good debugger, plenty of suggestions/inspections and customizability.
Java, .NET and languages like that also lend themselves pretty well to tools (and even LLMs) understanding what's going on, their runtimes are pretty good and platform differences don't give you too many issues from what I've seen. Though the frameworks or libraries you might use (e.g. the likes of Spring Boot, which you'll often see in enterprise projects) will definitely leave some performance on the table [1] and bring plenty of awkward and confusing situations along the way [2], especially if you're unlucky enough to have to work on codebases that have been around for over a decade. Old Java projects really suck sometimes, though maybe that applies to many old projects, regardless of tech.
Overall, I quite like them and am pretty productive, plus I unironically think that using Maven is pleasant enough (even adding custom repos isn't too convoluted [3]) and the modern approach of self contained .jar files that can just be run with a JDK install instead of separately having to manage Tomcat or GlassFish (or TomEE or Payara nowadays, I guess) is a step in the right direction!
While I do like packages such as Apache Commons and they're very useful, I also very much enjoy using something like Go more recently: it's easier to get started making simple web apps with it, less ceremony and overhead, more just getting to writing code.
[2] https://blog.kronis.dev/blog/it-works-on-my-docker (a short rant of mine, but honestly I very much prefer when you have configuration done explicitly in the code, not some layered abstractions with cryptic failure modes; Dropwizard is way nicer in that regard than Spring Boot)
And, alongside C# for historical reasons, the closest we got in mainstream to the whole Xerox PARC ideals of what a developer workstation is supposed to be like.
The language has/had some rough edges that have been improved over the years, but the developer experience of using a strongly-typed, object-oriented language within a sturdy IDE like Idea is just second to none. The debugging process is so very straightforward. Java became synonymous with enterprisey bloated systems for good reason, but there is no pile of mud Java system that can't be stepped through cleanly with a debugger.
I'd also throw in what was possibly their greatest idea that sped adoption and that's javadoc. I'm not sure it was a 100% original idea, but having inline docs baked into the compiler and generating HTML documentation automatically was a real godsend for building libraries and making them usable very quickly. Strong typing also lined up nicely with making the documents hyper-linkable.
Java was really made to solve problems for large engineering teams moreso than a single developer at a keyboard.
Indeed. Many languages have something similar to Javadoc, yet somehow I haven't encountered anything quite as good as Javadoc, and I can't explain why or exactly how it's better. I admit I haven't tried that hard either. But I suspect it's down to the nature of the language and how, with well-designed libraries at least (and not all are, certainly), there is a nice decomposition of modules, packages, classes/interfaces, and methods that leads to everything somehow having a correct place, and the Javadoc just follows. The strong typing is another contributor, where 90% of the time you can just look at the signature and infer what is intended. Finally, the old-fashioned frames-based HTML typically used with Javadoc is a great benefit.
Also, I've found I experience less reluctance to author Javadoc for some reason. Again, part of this is due to strong types, and much of the legwork being correctly generated in nearly every case.
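For readers who haven't written one, a typical Javadoc comment looks like this (a made-up utility class); the `javadoc` tool turns the tags into the linked HTML pages described above, with types in the signature cross-linked automatically:

```java
/**
 * Utility methods for working with numeric ranges.
 */
public final class Ranges {
    private Ranges() {} // no instances

    /**
     * Clamps {@code value} to the inclusive range {@code [lo, hi]}.
     *
     * @param value the input value
     * @param lo    the lower bound, inclusive
     * @param hi    the upper bound, inclusive; must be {@code >= lo}
     * @return {@code value} limited to the range
     * @throws IllegalArgumentException if {@code lo > hi}
     */
    public static int clamp(int value, int lo, int hi) {
        if (lo > hi) throw new IllegalArgumentException("lo > hi");
        return Math.max(lo, Math.min(hi, value));
    }
}
```

Because the doc lives next to the signature and the compiler sees both, stale parameter names get flagged, which is part of why the legwork feels so low.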
Lombok, when used with moderation, is wonderful. Mockito is magic, of a good kind. Maven still gets it done for me; I've yet to care about any problems Gradle purports to solve, and I think that's down to not creating the problems that Gradle is designed to paper over in the first place.
Today, if I had my choice of one thing I'd like to see in Java that doesn't presently exist it's Python's "yield". Yes, there are several ways to achieve this in Java. I want the totally frictionless generators of Python in Java.
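To illustrate the friction: here's one of those "several ways" for a simple Python generator (`a, b = 0, 1; while True: yield a; a, b = b, a + b`), a hand-rolled Iterator where the state Python keeps implicitly on the generator's stack has to be hoisted into fields by hand:

```java
import java.util.Iterator;

public class Fib implements Iterator<Long> {
    // State that Python's yield would carry across suspensions for free.
    private long a = 0, b = 1;

    @Override public boolean hasNext() { return true; } // infinite stream

    @Override public Long next() {
        long out = a;
        long next = a + b;
        a = b;
        b = next;
        return out;
    }

    public static void main(String[] args) {
        Fib fib = new Fib();
        for (int i = 0; i < 8; i++) System.out.print(fib.next() + " ");
        // 0 1 1 2 3 5 8 13
    }
}
```

It works, but inverting the control flow like this for anything with nested loops or try/finally is exactly the friction being complained about.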
I find these discussions have an interesting split between the folks who are more concerned with getting the feature out now versus the folks who have had to keep a thousand ancient features running.
True, but it's also true that code spends 99% of its lifetime in maintenance. That's the reason I am never impressed by tools that make it fast and easy to bootstrap.
I'm still trying to mentally grok the Clojure model and syntax hah. On my todo list. Clojure users seem to love it though. Do you have a tutorial that could sell it to me?
My Clojure AI book won't teach you the language, but after you read through a tutorial, my book contains interesting examples; read it online https://leanpub.com/clojureai/read
Many Clojure tutorials are free! It's difficult to say without knowing anything about your preferences and experience. Everyone's welcome to join the Clojure Slack community (for example), which has several tens of thousands of members and a dedicated beginners channel. I'm sure if you asked there, you'd get tons of recommendations tailored to you. https://clojurians.slack.com/
(BTW Clojure, as a Lisp dialect, has almost no syntax. You can learn THAT in 5 minutes. The challenge is in training your programming mind to think in totally new concepts)
For syntax, maybe this general Lisp advice will help: consider a normal function call, like f(a, b). To make this into a Lisp function call, drop the unnecessary comma (whitespace is enough to separate tokens) like f(a b), and then move the function name inside the parentheses, like (f a b). Applying operators that are considered "primitive" in other languages are syntactically treated the same as functions. So imagine an add function like add(a, b), but instead of being named 'add', it's just named '+', like +(a, b). Applying the same transformation as before, this turns into (+ a b).
Using the function application syntax for primitives like + is nice because you get the same flexibility of normal functions, like variable argument length: (+ a b c).
Clojure is a little bit less uniform than other Lispy languages in that it has special brackets for lists (square brackets) and for maps (curly brackets), but that's pretty much it.
> Java performance isn't the fastest, that's ok, a close 3rd place behind C/CPP ain't bad. And you're still ahead of Go, and 10x or more ahead of Python and Ruby.
I'm slightly surprised there aren't more KVM/virtualized bare-metal JVM environments. While I haven't used it in a while, for the decade+ I spent running Java in production we basically gave the entire system over to the JVM (with some overhead for some small non-JVM background daemons). Most things were built not to use POSIX standards, but Java equivalents. Occasionally direct file system access for writes was necessary (logging being a big one).
So giving the entire system to the JVM, and performing some warmup before a service considered itself "healthy", the JVM was reasonably fast. It devoured memory and you couldn't really do anything else with the host, but you got the Java ecosystem (for better or worse).
There was a lot of good tooling that is missing from other platforms, but also a ton of overhead that I am happy to not have to deal with at the moment.
Having a hard time finding it now, but someone put together a benchmark with two categories - naive and optimized - comparing implementations across languages with a workload vaguely resembling a real-world business application server with a mix of long and short lived objects. Java was at the top of the naive benchmark by a landslide and behind C and C++ (edit: and probably Rust) for the optimized ranking, but with a gap before the rest of the field.
With the JVM you basically outsource all the work you'd need to do in C/C++ to optimize memory management, and a typical developer is going to have a hell of a time beating it for non-trivial, heterogeneous workloads. The main disadvantage (at least as I understand it) is the memory overhead that Java objects incur, which prevents it from being fully optimized the way you can with C/C++.
> Java memory management seems weird from a Unix Philosophy POV, till you understand whats happening. Again, not perfect, but a good tradeoff.
The GC story is just great, however. Pretty much the best you can get in the entire ecosystem of managed-memory languages.
You have different GC algorithms implemented, and you can pick and tune the one that best fits your use-case.
The elephant in the room is of course ZGC, which has been delivering great improvements in lowering the Stop-the-world GC pauses. I've seen it consistently deliver sub-millisecond pauses whereas other algorithms would usually do 40-60 msec.
Needless to say, you can also write GC-free code, if you need that. It's not really advertised, but it's feasible.
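As a sketch of what that GC-free steady-state style tends to look like (preallocate everything up front, reuse it, stick to primitives; the class here is illustrative, not a real library):

```java
public class NoAllocDemo {
    // Steady-state path allocates nothing: the buffer is created once
    // and reused, a common pattern in low-latency Java.
    private final double[] samples = new double[1024]; // power of two, preallocated
    private int idx = 0;

    void record(double v) {
        samples[idx] = v;
        idx = (idx + 1) & (samples.length - 1); // ring buffer: no boxing, no 'new'
    }

    double mean() {
        double sum = 0;
        for (double s : samples) sum += s; // primitive loop, no iterator object
        return sum / samples.length;
    }

    public static void main(String[] args) {
        NoAllocDemo d = new NoAllocDemo();
        for (int i = 0; i < 1024; i++) d.record(2.0);
        System.out.println(d.mean()); // 2.0
    }
}
```

If the hot path never allocates, the collector has nothing to do there, which is why this style works even without any special GC configuration.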
> The elephant in the room is of course ZGC, which has been delivering great improvements in lowering the Stop-the-world GC pauses. I've seen it consistently deliver sub-millisecond pauses whereas other algorithms would usually do 40-60 msec.
As someone who's always been interested in gamedev, I genuinely wonder whether that would be good enough to implement cutting-edge combo modern acceleration structures/streaming systems (e.g. UE5's Nanite level-of-detail system.)
I have the ability to understand these modern systems abstractly, and I have the ability to write some high-intensity, nearly stutter-free gamedev code that balances memory collection and allocation for predictable latency, but not both, at least not without mistakes.
> As someone who's always been interested in gamedev, I genuinely wonder whether that would be good enough to implement cutting-edge combo modern acceleration structures/streaming systems (e.g. UE5's Nanite level-of-detail system.)
The GC would be the least of your problems.
Java is neat, but the memory model (on which the GC relies) and lack of operator overloading does mean that for games going for that level of performance would be incredibly tedious. You also have the warm up time, and the various hacks to get around that which exist.
Back when J2ME was a thing there was a mini industry of people cranking out games with no object allocation, everything in primitive arrays and so on. I knew of several studios with C and even C++ to Java translators because it was easier to write such code in a subset of those and automatically translate and optimize than it was to write the Java of the same thing by hand.
I'm honestly amazed people say this about Java, because the language almost couldn't be worse at giving you tools to use memory efficiently.
There are no value types (outside primitives), and everything is about pointer chasing. And surely, if there were less pointer chasing, the GC's work would be easier at the same time.
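The pointer-chasing complaint, sketched concretely (names are hypothetical): an array of objects in Java is an array of references, each element a separate heap object to dereference, whereas parallel primitive arrays keep the same data flat in memory.

```java
public final class Points {
    // "Array of structs" as Java forces it today: one heap object,
    // and one pointer to chase, per element.
    static final class Point { double x, y; }

    static double sumXBoxed(Point[] pts) {
        double s = 0;
        for (Point p : pts) s += p.x;   // one dereference per element
        return s;
    }

    // Flat layout: no per-element object, sequential memory access.
    static double sumXFlat(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s;
    }
}
```

Project Valhalla's value types aim to close exactly this gap; until then, the flat layout is the workaround.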
When people talk about GC performance they’re not talking about using memory efficiently. They’re talking about how fast the GC can allocate and how long it will stop the world when it needs to collect. In both of these areas you won’t find GCs better than the ones provided by HotSpot. Even with what you mention, pointer chasing and lack of structs, they still outperform other implementations.
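A small probe (a sketch, usable on any HotSpot JVM) for seeing those numbers yourself: the standard `java.lang.management` beans report how often each collector ran and how long it spent in total, e.g. to compare launching with `-XX:+UseZGC` against the default G1.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public final class GcStats {
    // Builds a one-line-per-collector summary of GC activity
    // observed so far in this JVM.
    public static String report() {
        StringBuilder sb = new StringBuilder();
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            sb.append(gc.getName())
              .append(": count=").append(gc.getCollectionCount())
              .append(", totalMs=").append(gc.getCollectionTime())
              .append('\n');
        }
        return sb.toString();
    }
}
```

For per-pause detail rather than totals, `-Xlog:gc` on the command line prints each pause with its duration.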
> Needless to say, you can also write GC-free code, if you need that. It's not really advertised, but it's feasible.
It is not feasible under the JVM type system. Even once Valhalla gets released it will carry restrictions that will keep that highly impractical.
It's much less needed with ZGC. But even C#, the poster child of the GC-language family when it comes to writing allocation-free, zero-cost-abstraction code, presents challenges the second you need to use code written by someone who doesn't care as much about performance.
Zero-allocation (obviously different from zero-GC) frameworks made a bit of a splash a little while back, but I'm not seeing much about them anymore from a brief search. I would have sworn that Quarkus was one of them, but it looks like that's definitely not the case anymore.
The downside is that you sacrifice a lot of the benefits of guard rails of the language and tooling for what may not end up being much savings, depending on your workload.
> The downside is that you sacrifice a lot of the benefits of guard rails of the language and tooling for what may not end up being much savings, depending on your workload.
I think that's mostly done in organisations where there's the time, budget, and willingness to optimize as far as possible.
Sacrificing the guardrails doesn't make sense for "general public" software, but it makes tremendous sense in environments where latency is critical and the scale is massive. But then again, in those environments there are people handsomely paid to have a thorough understanding of the software and keep it working (making updates, implementing features, etc.).
I worked on software that was written to be garbage-free wherever it could be. Latency (server-side latency, I mean) in production (so a real-world use case, not a synthetic benchmark) was about 7-8 microseconds per request (p99.9), and stop-the-world garbage collection was around 5 msec (G1GC, p50, mostly young generation) or ~40 msec (p99.9, full GC), later lowered to ~800-900 microseconds with ZGC.
I know it might sound elitist, but the real difference here is... skill issues. Some people will just shun Java and yap about rewriting in Rust or something like that, while others will benefit from the already existing Java ecosystem (and tooling) and optimize what they need to hit the speed they're targeting.
I know I'll be downvoted by the rust evangelism task force, but meh.
Is data access in such a project purely from in-memory sources? I suspect most people shun it because database access (especially if the DB itself is over the network on a different machine) is already slow enough that ZGC / zero allocation won't be noticed.