How to Talk About Programming Languages?
Talking about programming languages is challenging when you’re “just” a user. The most substantive discussions about programming languages (PLs) typically require deep knowledge of PL theory, something most of us don’t have. However, once a language moves beyond academia and gains a user base in common programming domains, it becomes a product. And like any product, users have the right to discuss it without deep technical knowledge.
While we can discuss languages without PL theory knowledge, we should do so thoughtfully, not just to respect language developers, but to better understand the nuances that benefit us as developers. Many features that start in niche languages eventually make their way into mainstream ones. By understanding the motivations of PL researchers and recognizing what makes certain features valuable, we can help shape better mainstream languages. In this article, I’ll explore what makes a language “good” and how to recognize good languages when we encounter them.
First, let’s clarify: when I say “language,” I’m referring strictly to the programming language itself, not its ecosystem. While ecosystems are crucial, they can change rapidly and shouldn’t be the primary criterion for evaluating a new language’s merit. If ecosystem were our main criterion, no new language could ever be “good.” Similarly, I won’t discuss obvious requirements like “having a good build system”—those are a given.
We use languages to describe solutions, and in expressing these solutions, language features both help and constrain us. There’s an obvious trade-off here, but simply having many features and no constraints doesn’t automatically make a good language. In my opinion, we can evaluate languages across four key categories:
- Consistency: The language’s syntax and semantics should be predictable. Haskell exemplifies this well—even when reading unfamiliar Haskell code, you can often predict what it does. JavaScript, conversely, is notorious for its pitfalls and inconsistencies.
[] + [] // "" (empty string)
[] + {} // "[object Object]"
{} + [] // 0
{} + {} // NaN
const arr = [1, 2, 3];
arr.length = 1; // Array is now [1]
arr.length = 5; // Array is now [1, empty × 4]
These behaviors aren’t bugs—they’re deliberate design decisions resulting from JavaScript’s type coercion rules and object-to-primitive conversions. When + receives an array or a plain object, the operand is first converted to a primitive, which ends up calling toString(): arrays become comma-joined strings and plain objects become "[object Object]". The position of {} also changes whether it’s parsed as a block statement or an object literal, which is why {} + [] evaluates to 0 (unary plus applied to an empty string).
While these rules are documented in the language specification, they create cognitive overhead for developers. Such implicit conversions make it harder to maintain and refactor large codebases. Consider trying to refactor a function that might receive either arrays or objects as arguments—you’d need to carefully handle all these edge cases.
Contrast this with Haskell, where changing a data structure (like converting from Vector to Set) is often straightforward thanks to static typing and explicit conversions. The compiler guides you through necessary changes, catching potential issues at compile time rather than runtime. This consistency makes refactoring more reliable and maintainable.
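To make that concrete, here’s a minimal sketch of compiler-guided refactoring (the names and the container choice are mine, purely for illustration). Swapping a list for a Set from the standard containers package changes the function’s type, and every call site that still assumes a list fails to compile until it’s updated:

import qualified Data.Set as Set

-- Before the refactor this was: activeUsers :: [String]
-- After: only the type and the constructor change here...
activeUsers :: Set.Set String
activeUsers = Set.fromList ["alice", "bob", "alice"]

-- ...and the compiler now rejects every call site that still treats
-- the result as a list, pointing us at each spot where, say, `elem`
-- needs to become `Set.member`:
isActive :: String -> Bool
isActive name = Set.member name activeUsers

main :: IO ()
main = print (isActive "alice")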
- Error Prevention Before Runtime: The perfect language would prevent all programmer errors while imposing zero constraints—but we live in the real world. We must make trade-offs for safety. Take Rust’s borrow checker: whether this constraint is worth it depends entirely on your domain. Some constraints might be trivial in one context but critical in another. Personally, I believe writing type annotations for a good type checker is a worthwhile trade-off, though not everyone agrees (and I admittedly don’t like those who disagree).
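Here’s a tiny sketch of what that trade-off buys (my own example, not tied to any particular codebase): a few extra annotations, and in exchange the compiler refuses to let a division-by-zero path go unhandled.

-- The Maybe in the signature is the "annotation tax": callers cannot
-- treat the result as a plain Int.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main =
  -- The compiler forces a pattern match here, so the zero case is
  -- handled before the program ever runs.
  case safeDiv 10 0 of
    Nothing -> putStrLn "division by zero, handled up front"
    Just q  -> print q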
- Succinctness: While I love coding, code is ultimately a liability. Being able to do more with less code is valuable. Consider this example from Project Euler (finding the largest prime factor of 600851475143), solved in a single line of Uiua:
⊢⇌°/× 600851475143
Compare this with the Haskell version:
primes :: [Int]
primes = 2 : filter isPrime [3, 5..]

isPrime :: Int -> Bool
isPrime n = null $ tail $ primeFactors n

primeFactors :: Int -> [Int]
primeFactors n = factor n primes
  where
    factor :: Int -> [Int] -> [Int]
    factor n (p:ps)
      | p * p > n = [n]
      | n `mod` p == 0 = p : factor (n `div` p) (p:ps)
      | otherwise = factor n ps

largestPrimeFactor :: Int -> Int
largestPrimeFactor n = last $ primeFactors n

main :: IO ()
main = print $ largestPrimeFactor 600851475143
And the C++ implementation:
#include <cmath>
#include <iostream>

bool isPrime(long long num) {
    if (num < 2)
        return false;
    for (long long i = 2; i <= std::sqrt(num); i++) {
        if (num % i == 0)
            return false;
    }
    return true;
}

long long largestPrimeFactor(long long num) {
    long long largestFactor = 1;
    for (long long i = 2; i <= std::sqrt(num); i++) {
        while (num % i == 0) {
            if (isPrime(i)) {
                largestFactor = i;
            }
            num /= i;
        }
    }
    if (num > 1 && isPrime(num)) {
        largestFactor = num;
    }
    return largestFactor;
}

int main() {
    long long num = 600851475143;
    long long largestFactor = largestPrimeFactor(num);
    std::cout << "The largest prime factor of " << num << " is: " << largestFactor
              << std::endl;
    return 0;
}
The difference largely comes down to describing “what” versus “how”: the distinction between declarative and imperative approaches. While declarative languages might seem superior, they face two main challenges. First, as the Uiua example shows, they can be incredibly hard to read (I wrote that code months ago and can’t explain how it works now). Second, you have less control over declarative solutions when things go off the happy path. Again, it’s all about trade-offs.
- Learning Curve: This is a function of all other factors, and it’s definitely not linear. Being declarative doesn’t automatically make a language easier or harder to learn. The real questions are: Does the difficulty pay off? Is the challenge coming from powerful features, or is the language just inconsistent with itself?
There’s no universal “sweet spot” that makes a perfect language. Different domains have different priorities. But like any product, programming languages have target users, and we need to think from their perspective. What problems is this language trying to solve? Are its solutions worthwhile? Not all PL research ideas are good ones. Dismissing a language without considering these trade-offs and domain needs is a form of low-key anti-intellectualism that hurts our field. Every significant language feature exists to solve real problems, even if those problems aren’t relevant to our specific domain. Rust’s borrow checker might seem unnecessary if you’re writing web applications, but it’s revolutionary for systems programming. Haskell’s type system might appear overly complex for scripting, but it enables remarkable guarantees for larger applications.
The key is to approach programming languages with curiosity and nuance. Instead of asking “Is this a good language?” we should ask “What problems does this language solve well?” This approach not only helps us make better technology choices but also contributes to the evolution of programming languages as a whole. After all, many features we take for granted today, like garbage collection and type inference, started as “academic” features before becoming mainstream.