ryandrake 20 hours ago [-]
Almost all of Patrick's points are great if your software development goal is to make a buck. They don't seem to matter if you're writing open source, and I'd argue that desktop apps are still relevant and wonderful in the open source world. I just started a new hobby project, and am doing it as a cross-platform, non-Electron, desktop app because that's what I like to develop.
The onboarding funnel: Only a concern if you're trying to grow your user base and make sales.
Conversion: Only a concern if you're charging money.
Adwords: Only a concern if, in his words, you're trying to "trounce my competitors".
Support: If you're selling your software, you kind of have to support it. Minor concern for free and open source.
Piracy: Commercial software concern only.
Analytics and Per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
The only point I can agree with him that makes web development better is the shorter development cycles. But I would argue that this is only a "developer convenience" and doesn't really matter to users (in fact, shorter development cycles can be worse for users as their software changes rapidly like quicksand out from under them.) To me, in my open source projects, my "development cycle" ends when I push to git, and that can be done as often as I want.
hn_acc1 16 hours ago [-]
To me, I prefer desktop apps because I KNOW when I've upgraded - it either said "upgrade now?" and did it, or, in the olden days, I had to track it down, or I installed an updated version of a distro, which included updated apps, so I expected some updates.
There are some things that NATURALLY lend themselves to a website - like doctor's appointments, bank balance, etc - but it's still a pain when, on logging in to "quickly check that one thing" that I finally got the muscle memory down for because I don't do it that often, I get a "take a quick tour of our great new overhauled features" where now that one thing I wanted is buried 7 levels deep or something, or just plain unfindable.
For something like Audacity (the audio program), how the heck does it make sense to put that on a website (I'm just giving a random example, I don't think they've actually done this), where you first have to upload your source file (privacy issues), manipulate it in a graphically/widget-limited browser - do they have a powerful enough machine on the backend for your big project? - then download the result? It's WAY, WAY better to be able to run the code on your own machine, etc. AND to be stable, so that once you start a project, it won't break halfway through because they changed/removed that one feature you relied upon (no, not thinking of AI at all, why do you ask? :-)
SoftTalker 15 hours ago [-]
You touched on the one thing I hate most about infrequently used websites. The inevitable popup to "explore our new features." Hell no, I don't want to do that. I haven't logged on in six months, so I'm obviously here now with a purpose in mind and I want to do that as quickly as possible and then close the tab.
gblargg 8 hours ago [-]
Might as well use another site if it's going to be all different since the last time. That's a big reason I use local apps, because I control when (and if) they get "upgraded" (and can roll back if I don't like them).
ThunderSizzle 2 hours ago [-]
Most apps that wrap websites like that will force me to update to be able to continue using it
emodendroket 7 hours ago [-]
> To me, I prefer desktop apps because I KNOW when I've upgraded - it either said "upgrade now?" and did it, or, in the olden days, I had to track it down, or I installed an updated version of a distro, which included updated apps, so I expected some updates.
Yeah, but as a maintainer it's the opposite, isn't it? I don't have to worry about supporting version current - 3 in the Polish version of Windows because you're always running the version I've deployed in the environment I've deployed it in (I mean, yes, I'm oversimplifying given the frontend component, but that's still a much smaller surface).
hn_acc1 16 hours ago [-]
Of course, I'm also an old-school hacker (typed my first BASIC program ~45 years ago), so I have a desktop mentality. None of this newfangled 17-pound-portable stuff for me :-) And phones are at best a tertiary computing mechanism: first, desktop, then laptop, then phone. So yes, I'm clearly biased. Not trying to hide that.
d3Xt3r 14 hours ago [-]
> For something like Audacity (the audio program), how the heck does it make sense to put that on a website (I'm just giving a random example, I don't think they've actually done this), where you first have to upload your source file (privacy issues), manipulate it in a graphically/widget-limited browser
I understand it was just an example, but you'd be surprised how far browsers have come along with technologies like WebAssembly and WebGL. Forget audio editing, you can even do video editing - without uploading any files to a remote server[1]. All the processing is done locally, within your browser.
And if you thought that was impressive, wait till you find out that you can even boot the whole Linux kernel in your browser using a VM written in WASM[2]!
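To make the "all processing is done locally" point concrete, here's a hedged plain-JavaScript sketch of the per-sample work a browser-based audio editor does on your own machine. Real apps would route this through the Web Audio API or a WASM DSP core; `applyGain` and the sample values are illustrative, not any particular app's code:

```javascript
// Apply a gain factor to raw audio samples, entirely client-side.
// Float32Array is the sample format browser audio APIs hand you.
function applyGain(samples, gain) {
  const out = new Float32Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // Clamp to the [-1, 1] range that audio samples use.
    out[i] = Math.max(-1, Math.min(1, samples[i] * gain));
  }
  return out;
}

const clip = new Float32Array([0.1, -0.2, 0.5]);
const louder = applyGain(clip, 2); // nothing is uploaded anywhere
```

The source file never leaves the browser; only the rendered result gets saved back to disk.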
But I do agree with your points about lack of feature stability. For the record, I too prefer native apps (but for me, the main selling points are low RAM/CPU/disk requirements and keyboard friendliness).
Sure, but taking your video editor example, what advantages does an in-browser app provide over a native application like DaVinci Resolve, other than portability and not needing to install the application, in exchange for reduced performance, a clunkier interface, and reduced integration with the rest of the desktop platform?
And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
d3Xt3r 8 hours ago [-]
Portability and not having to install the app is a huge advantage. Especially on operating systems where there aren't any decent choices. Take Android for example, the Play Store is full of rubbish and adware-riddled apps, finding a decent app in there is like looking for a needle in a haystack. And whilst FDroid exists, most of the apps there are pretty basic in general, especially wrt this example (video editing).
Putting aside the video editing example for a bit, consider the photo editing web app Photopea, which is an excellent alternative to Adobe Photoshop. Linux is in urgent need of a Photoshop-like editor (and no, GIMP doesn't cut it), but Photopea does a decent enough job for many amateurs and even some pros. For a lot of these folks, Photoshop is one of the last things stopping them from switching to Linux, so apps like Photopea fill that gap. And guess what, Photopea works great on Android too.
Another use case is restricted environments where you can't easily find and install apps, eg immutable distros, or work computers. I use Photopea on my work PC quite regularly for light editing, because MS Paint sucks, and my role doesn't really justify going thru the hassle of getting the approvals to get an editor installed. So like it or not, web apps have their place.
graemep 3 minutes ago [-]
> Linux is in urgent need of a Photoshop-like editor (and no, GIMP doesn't cut it), but Photopea does a decent enough job for many amateurs and even some pros.
How is Photopea better than GIMP? How is it better than Krita?
cyberax 11 hours ago [-]
> Sure, but taking your video editor example, what advantages does an in-browser app provide over a native application like DaVinci Resolve
It's the issue of friction. Also, good webapps are often _better_ than native apps, as they can support tabs.
> And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
Because it relied on Java and Swing, which were a disaster for desktop apps.
kalleboo 10 hours ago [-]
> Also, good webapps are often _better_ than native apps, as they can support tabs
This is a relatively new API, and the native apps that I use still don't support it properly. There are also things like middle-clicking to open things in new tabs, and being able to bookmark locations.
Aurornis 15 hours ago [-]
This blog post benefits a lot from understanding where the author was at that point in their career: They had become well known for their writing about their Bingo Card Creator software, but were moving on. After this they went on to build Appointment Reminder, a webapp that grew to a nice MRR before being sold off. Both were nice little indie developer success stories.
I grew up reading his writings and learned pretty quickly to read them as "this is what I'm thinking right now in my life" even though they're written more as authoritative and decisive writings from an expert. Over time he's gone from SEO expert to $30K/week consulting expert to desktop app expert to indie SaaS expert to recruiting industry expert to working for Stripe Atlas. It was fun to read his writings at each point, but after so many changes I realized it was better to read them as a blog of ongoing learnings and opinions, not necessarily as retrospective wisdom shared from years of experience on the topic, even if that's what the writing style conveys.
So I agree that the advice in the post should be taken entirely in context of pursuing the specific goals he was pursuing at the time. The less your goals happen to align, the less relevant the advice becomes.
analog31 20 hours ago [-]
Going further, if you're a hobbyist, you're probably instinctively prioritizing the aspects of the hobby that you enjoy. My first app was a shareware offering in the 1980s, written in Turbo Pascal, that was easy to package and only had to run on one platform. Because expectations were low, my app looked just as good as commercial apps.
Today, even the minimal steps of creating a desktop app have lost their appeal, but I like showing how I solved a problem, so my "apps" are Jupyter notebooks.
Noumenon72 19 hours ago [-]
My coworker showed a Jupyter notebook with ipywidgets and it looked just like an app. A good CLI using Typer (from the FastAPI author) looks a lot like an app too.
theK 19 hours ago [-]
I see a lot of this sentiment amongst developer friends but I never could relate. It's not that I'm against it or anything, it just doesn't move me personally.
Most things I create in my free time are for my and my family's consumption and typically benefit immensely from the write once run everywhere nature of the web.
You can launch a small toy app on your intranet and run it from everywhere instantly. And typically these things are also much easier to interconnect.
the__alchemist 18 hours ago [-]
They're also ubiquitous for creative works, i.e. the sort of things a small set of people spend a lot of time on but that most people never use. Examples:
- CAD / ECAD
- Artist/photos
- Musician software. Composing, DAW etc
- Scientific software of all domains, drug design etc
leonidasrup 17 hours ago [-]
Adobe Photoshop, the most used tool for professional digital art, especially in raster graphics editing, was a prime example of a perfectly fine commercial desktop application converted to a cloud application with a single purpose: increased profit for Adobe.
rustcleaner 16 hours ago [-]
Master Collection CS6 still works excellently, and is now (relatively) small enough to live comfortably in virtuo. Newer file formats can be handled with ffmpeg and a bit of terminal-fu.
TacticalCoder 14 hours ago [-]
Slicers for people doing 3D printing too (don't know if webapp slicers are more common than desktop app slicers though).
Desktop publishing.
Brokerage apps (some are webapps but many ship an actual desktop app).
And yet, to me, something changed: I still "install apps locally", but "locally" now means "only on my LAN", and they can be webapps too. I run them in containers (and the containers are in VMs).
I don't care much as to whether something is a desktop app, a GUI or a TUI, a webapp or not...
But what I do care about is being in control.
Say I'm using "I'm Mich" (immich) to view family pictures: it's shipped (it's open source), I run it locally. It'll never be "less good" than it is today: for if it is, I can simply keep running the version I have now.
It's not open to the outside world: it's to use on our LAN only.
So it's a "local" app, even if the interface is through a webapp.
In a way this entire "desktop app vs webapp" is a false dichotomy, especially when you can have a "webapp (really in a browser) that you can self-host on a LAN" and then a "desktop app that's really a webapp (say wrapped in Electron) that only works if there's an Internet connection".
tmtvl 20 hours ago [-]
> Analytics and Per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
KDE has analytics, they're just disabled by default (and I always turn them on in the hopes of convincing KDE to switch the defaults to the ones I like).
sevenzero 16 hours ago [-]
I quit all social media, cancelled Spotify and whatnot, and I am hella thankful for the Strawberry media player as a desktop app, as it allows me to play all the music I actually own. I love desktop apps.
itsgabriel 13 hours ago [-]
I generally agree. If you're not doing it for money you don't technically need most of these things. But if you see open source as more than “here's the code” some of them matter. Support will find you, via GitHub issues, emails, or DMs. Analytics is really important because it shows whether the software works for people besides you. Without money you usually do not have playtesters or a UX designer, so you get fewer useful bug reports. Frustrated users rarely take the time to write a detailed issue.
solatic 2 hours ago [-]
> only a concern if you're charging money
No, it's a concern if you care about impact. Improving commercial profits is one kind of impact that is relevant to for-profit corporations, but there is also impact like "improving user privacy" or "helping lower-income people manage their finances with a free-as-in-beer product". This impact can be measured and the feedback can be used to improve the product according to non-profit, non-commercial goals.
There are also people who build open-source software as a hobby and couldn't give two shits whether other people use it or not. More power to them. For those people, you are correct. https://book.iced.rs/philosophy.html comes to mind.
Then there are projects like Streisand (maybe a bad example, I see it has since been archived, but it came to mind) that want to change the world in some way. Those projects very much do need to care about metrics like, how many people are downloading the software, are people opening GitHub issues, are we obscure or is our target audience talking about us, hopefully positively but if not, how can we improve that? Value must always be worth the cost (even when the code is free, it must be worth the time to download, give it a try, give it CPU/RAM, maintain/upgrade the installation) - are we giving users value or are they churning?
It might blow your mind but even non-profits hire people with MBAs (and universities offer programs for MBAs that focus on non-profit management), precisely because some organizations focus on non-financial impact.
famouswaffles 15 hours ago [-]
>To me, in my open source projects, my "development cycle" ends when I push to git, and that can be done as often as I want.
If development ends at a git push and users are left to build/fend for themselves (granted, this is a lot of open source), then yeah, not much difference. But if you're building and packaging it up for users (which you're more likely to be doing if your project is specifically an app), then the difference is massive.
satvikpendem 18 hours ago [-]
Agreed, desktop frameworks have been getting really good these days: Flutter, Rust's GPUI (used by the popular Zed editor, itself notably a competitor to webview-based Electron apps), egui, Slint, and so on. Not to mention you can even render your desktop app to the web via WASM if you still want to share a link.
Times have changed quite a bit from nearly 20 years ago.
rustcleaner 16 hours ago [-]
I generally despise "web tech" as it is today. Browsers are not application platforms!
kelnos 11 hours ago [-]
You should probably accept the fact that browsers are indeed application platforms. I'm not saying they should be, or that they are good at that role, but they absolutely are, at this point in time.
nonethewiser 20 hours ago [-]
its just waaaaaay easier to distribute a web app
For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose web 95% of the time.
mohamedkoubaa 19 hours ago [-]
This points to our failure as an industry to design a universal app engine that isn't a browser.
fbrchps 19 hours ago [-]
Counterpoint: is the web browser not already fulfilling the "universal app engine" need? It can already run on most end-user devices, where people do most other things. IoT/edge devices don't count here, but these days most of their data is just being sent back to a server which is accessible via some web interface.
Ignoring the fragmentation of course; although that seems to be getting less and less each year (so long as you ignore Safari).
rustcleaner 16 hours ago [-]
>Counterpoint: is the web browser not already fulfilling the "universal app engine" need?
Counter-counterpoint: Maybe it's time to require professional engineer certification before a software product can be shipped in a way that can be monetized. It's to filter devs from the industry who look at browsers today and go "Yeah, this is a good universal app engine."
mohamedkoubaa 12 hours ago [-]
This was cathartic to read thank you
abdullahkhalids 17 hours ago [-]
Yes. But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language).
The impact on people's time, money and on the environment are proportional.
mpyne 15 hours ago [-]
> But it consumes at least 10x-100x more resources to run a web app than to run a comparable desktop app (written in a sufficiently low level language)
Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?
Pannoniae 13 hours ago [-]
Yes. I can run entire 3D games... ten times over, in the memory footprint of your average browser. Even fairly decent-looking ones, not just your Doom or Quake!
And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave off of that.
skydhash 12 hours ago [-]
I believe Firefox uses separate processes per tab, and most of them are over 100MB per page. That's understandable when you know that each page is the equivalent of a game engine with its own attached editor.
A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.
archagon 11 hours ago [-]
I think a browser is an inverted universal engine. The underlying tech is solid, but on top of it sits the DOM and scripting, and then apps have to build on top of that mess. In my opinion, it would be much better for web apps and the DOM to be sibling implementations using the same engine, not hierarchically related. You wouldn’t use Excel as a foundation to make software, even though you could.
Maybe useful higher-level elements like layout, typography, etc. could be shared as frameworks.
mohamedkoubaa 11 hours ago [-]
You are thinking along the same lines as me. The fact that the first thing to be standardized was HTML made it a fait accompli that everything had to be built on top of it, since that "guaranteed" <insert grain of salt> cross vendor compatibility.
There are many alternate histories where a different base application layer (app engine) could have been designed for the web (the platform)
Cheese48923846 18 hours ago [-]
Remember Flash? The big tech companies felt a threat to their walled gardens. They formed an unholy alliance to stamp out flash with a sprinkle of fake news labeling it a security threat.
Remember LiveScript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross-platform system. The web and JavaScript just BARELY escaped a big tech smackdown. They stroked the ego of big tech by renaming it to JavaScript to honor Java. Licked some boots, promised a very mediocre, non-threatening UI experience in the browser, and big tech allowed it to exist. Then the whole world started using the web/JavaScript. It caught fire before big tech could extinguish it. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens, but that's another story.
You may not like browsers but they are the ONLY thing big tech can't extinguish, due to ubiquity. Achieving ubiquity is not easy, not even possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross-platform GUI.
Web browser UI available on all devices is not a failure, it's a miracle.
To top it all off, HTML/CSS/Javascript is a pretty good system. The box model of CSS is great for a cross platform design. Things need to work on a massive TV or small screen phone. The open text-based nature is great for catering to screen readers to help the visually impaired.
The latest whizbang GPU-powered UI framework probably forgot about the blind. It's probably stuck in the days of absolute positioning and non-declarative layouts, with x,y(,z) coords. It may be great for the next-gen 4-D video game, but it sucks for general purpose use.
MrDrMcCoy 15 hours ago [-]
As I recall, Flash and Java weren't so much security issues themselves, but rather the poorly designed gaping hole they used to enter the browser sandbox being impossible to lock down. If something like WASM existed at the time to make it possible for them to run fully inside the sandbox, I bet they'd still be around today. People really did like Macromedia/Adobe tools for web dev, and the death of Flash was only possible to overcome its popularity because of just how bad those security holes were. I miss Flash, but I really don't miss drive-by toolbar and adware installation, which went away when those holes were closed.
tolciho 18 hours ago [-]
Flash had quite a lot of quite severe CVEs; how many of those do you suppose are "fake news" connived by conspiracy (paranoid style in politics, much?) as opposed to Flash being a pile of rusted dongs as far as security goes? A lot of software from that era was a pile of rusted dongs, bloated browsers included. Flash was also the first broken website I ever came across, for some restaurant I never ended up going to. If they can't show their menu in text, oh well.
jimbokun 18 hours ago [-]
We have failed to design a universal app engine…except for the one that dwarfs every other kind of software development for every kind of device in the world.
jcelerier 18 hours ago [-]
Can a single webpage address & use more than 4gb of ram nowadays? I was filling 16gb of ram with a single Ableton live session in 2011.
Gigachad 12 hours ago [-]
Via electron I’m sure it could. In the main browser it’s probably best to cap usage to avoid having buggy pages consume everything. Anything heavy like a video editor you’d rather install as an electron app for deeper system access and such.
rustcleaner 16 hours ago [-]
How about a webpage shouldn't ever address & use even 4GB of RAM! :O
theK 19 hours ago [-]
No. We did, it is the browser.
ryandrake 19 hours ago [-]
"The Browser" has turned out to be a pretty terrible application API, IMO. First, which browser? They are all (and have been) slightly different in infuriating ways going all the way back to IE6 and prior. Also, a lot of compromises were made while organically evolving what was supposed to be "a system for displaying and linking between text pages" into a cross-platform application and system API. The web's HTML/CSS roots are a heavy ball and chain for applications to carry around.
It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.
jstanley 18 hours ago [-]
They're not that different, and it's a pretty good platform and pretty easy to program for. That's why it won.
irishcoffee 18 hours ago [-]
It didn't win. It just survived long enough. The web is a terrible platform. I haven't ever shipped a line of "web code" for money and I plan to keep it that way until I retire. What a miserable way to make a living.
jstanley 18 hours ago [-]
Perhaps you're taking the npm/react/vercel world to be the entire web? I agree that that stuff is a scourge. But you can still just write HTML and JavaScript and serve it from a static site. I wrote an outline in https://incoherency.co.uk/blog/stories/web-programs.html which I frequently point coding agents to when they go astray.
mohamedkoubaa 12 hours ago [-]
I wouldn't say that react is what's wrong with the web. I would say that the web is what's wrong with react.
irishcoffee 16 hours ago [-]
When I was a kid I was running websites with active forums and a real domain name, and I did it with vBulletin and my brain. Someone bought the domain name and website off of me, haven't touched web tech since. I did use Wt at an old job once, but the "website" was local to 1 machine and there were no security concerns.
2ndorderthought 12 hours ago [-]
I envy your pure soul. I am one of many who have, at times, been coerced through financial strain to write some front-end code. All I ask is, when the time comes, you try to remember me for who I was and not the thing I became.
Gigachad 12 hours ago [-]
Look at caniuse: if you see green boxes on all the current-version browsers, then you are good to go. If not, wait until the feature is more widely supported.
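The caniuse check usually pairs with feature detection in code, so older browsers get a fallback instead of a crash. A hedged sketch of the pattern (the `scope` parameter stands in for the browser's `window`; the injectable shape is for illustration, real code would just probe `navigator` directly):

```javascript
// Pick a clipboard-writing function based on what the environment
// supports; fall back gracefully when the async clipboard API is absent.
function pickClipboardWriter(scope) {
  const clip = scope.navigator && scope.navigator.clipboard;
  if (clip && typeof clip.writeText === "function") {
    return (text) => clip.writeText(text); // modern path
  }
  // Fallback: report lack of support instead of throwing at call sites.
  return () => Promise.reject(new Error("clipboard unsupported"));
}
```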
Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.
I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).
mohamedkoubaa 12 hours ago [-]
Platforms and app engines are orthogonal concerns. I agree that platform guidelines are worth preserving, and the web as a platform solves it by hijacking the rectangle that the native platform yields to it. Any app engine could do the same thing.
MrDrMcCoy 15 hours ago [-]
Please, for the love of all that is holy, not GTK.
rustcleaner 16 hours ago [-]
>or offers some competitive UX advantage (although this reason is shrinking all the time).
As a user, a properly implemented desktop interface will always beat the web. By properly, I mean obeying the shortcut keys and conventions of the desktop world: Alt+letter assignments for boxes and functions, Tab moving between elements, PageUp/PageDown in a chat window's text entry area scrolling the chat history above rather than the entry area itself (looking at you, SimpleX), etc.
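The PageUp/PageDown convention boils down to a small routing decision, sketched here as plain logic (no DOM; the pane names and the function are made up for illustration):

```javascript
// Decide which pane a key should scroll, per the desktop convention:
// paging from the text entry scrolls the chat history above it.
function scrollTargetFor(key, focusedPane) {
  const pagingKeys = ["PageUp", "PageDown"];
  if (focusedPane === "entry" && pagingKeys.includes(key)) {
    return "history"; // redirect paging to the conversation view
  }
  return focusedPane; // other keys scroll whatever has focus
}
```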
Sorry, not sorry. Web interface is interface-smell, and I avoid it as much as possible. Give me a TUI before a webpage.
foresto 17 hours ago [-]
> its just waaaaaay easier to distribute a web app
Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.
2ndorderthought 12 hours ago [-]
No DNS, no DDOS, no network plane, no kubernetes, no required data egress, no cryptographic vulnerabilities, no surveillance of activity... It's almost like the push for everything to go through the web was like a psyop so everything we did and when was logged somewhere. No, no, that's not right.
zephen 18 hours ago [-]
Agreed.
And his point about randomly moving buttons to see if people like it better?
No fucking thanks. The last thing I need is an app made of quicksand.
rustcleaner 16 hours ago [-]
God damn that drives me up a wall! Mozilla is a terrible offender in this regard, but there are myriad others too!
The user interface is your contract with your users: don't break muscle memory! I would ditch FF-derivatives, but I'm held hostage by them because the good privacy browsers are based on FF.
Stop following fads! Be like craigslist: never change, or if you do then think long and hard about not moving things around! Also if you're a web/mobile developer, learn desktopisms! Things don't need to be spaced out like everything is a touch interface. Be dense like IRC and Briar, don't be sparse like default Discord or SimpleX! Also treat your interfaces like a language for interaction, or a sandbox with tools; don't make interfaces that only corral and guide idiots, because a non-idiot may want to use it someday.
I really wish Stallman could be technology czar, with the power to [massively] tax noncompliance to his computing philosophy.
otterley 19 hours ago [-]
To be fair, probably most of us here on HN write software to put food on the table. Don’t pooh-pooh our careers.
sfpotter 18 hours ago [-]
He didn't pooh-pooh anyone's careers.
otterley 18 hours ago [-]
The way it's worded comes across that way.
foresto 17 hours ago [-]
I have spent a good deal of my life writing software to put food on the table. I didn't interpret any of what he wrote in the way you describe. Perhaps you could explain why you did.
kylemaxwell 15 hours ago [-]
Both can be true: we can have different preferences about what we're doing to put food on the table and what we're doing when we build something on our own for other reasons.
miki123211 18 hours ago [-]
Attitudes like these are why non-developers don't want to use open source software.
These concerns may not matter to you, the developer, but they absolutely matter to end-users.
If your prospective user can't find the setup.exe they just downloaded, they won't be able to use your software. If your conversion and onboarding sucks, they'll get confused and try the commercial offering instead. If you don't gather analytics and A/B test, you won't even know this is happening. If you're not the first result on Google, they'll try the commercial app first.
Users want apps that work consistently on all their devices and look the same on both desktop and mobile, keep their data when they spill coffee on the laptop, and let them share content on Slack with people who don't have the app installed. Open source doesn't have good answers to these problems, so let's not shoot ourselves in the foot even further.
satvikpendem 18 hours ago [-]
This presupposes that the OSS creator even wants users in the first place, which might not always be the case as it could be personal software; and that these users actually want these features, as many do not want analytics, ads, and A/B tests in your app.
janalsncm 18 hours ago [-]
I guess in the same way that one might presuppose a boat wants water?
If a piece of software doesn’t have users and the developers don’t care about the papercuts they are delivering, I would argue what they have created is more of an art project than a utility.
notarobot123 16 hours ago [-]
Science research without obvious practical application can still be important and valuable.
Art works without popular appeal can become highly treasured by some.
Open source software doesn't have to be ambitious to be worthwhile and useful. It can be artful, utilitarian, or an artifact of play. Commercial standards shouldn't be the only measure of good software.
satvikpendem 16 hours ago [-]
It's more like building your own boat, and then someone comes along and says it'll never compete with a cruise ship because it doesn't have a water slide and an endless buffet; sometimes, things in the same category can serve wholly different purposes.
2ndorderthought 12 hours ago [-]
If my user cannot install software on their own computer then I do not want their money. They have issues they need to work out on their own, and they might be better off saving their money.
rustcleaner 16 hours ago [-]
>Attitudes like these is why non-developers don't want to use open source software.
Good! It's not for them! They can stay paypigs on subscription because they can't git gud!
MarcelOlsz 18 hours ago [-]
I'm a seasoned developer and I frequently come across OSS projects where I spend half an hour or more in "how the fuck do I actually use this"-land. A lot of developers need to take the mindset of writing the documentation for their non-tech grandma from the ground up.
kelnos 11 hours ago [-]
Or they can just, y'know, not do that. Because they don't owe you, or anyone, anything.
sharts 18 hours ago [-]
LLMs to the rescue
MarcelOlsz 18 hours ago [-]
It's the principle.
franga2000 20 hours ago [-]
Not off to a great start... The "look how many steps it takes to convert shareware users" is insanely overblown.
1-4. Google, find, read... this is the same for web apps.
2. Click download and wait a few seconds. Not enough time to give up because native apps are small. Heavy JS web apps might load for longer than that.
3. Click on the executable that the browser pops up in front of you. No closing the browser or looking for your downloads folder. It's right there!
3.5. You probably don't need an installer and it definitely doesn't need a multi-step wizard. Maybe a big "install" button with a smaller "advanced options".
3.6. Your installer (if you even have it) autostarts the program after finishing
4. The user uses it and is happy.
5. Some time later, the program prompts the user to pay, potentially taking them directly onto the payment form either in-app or by opening it in a browser.
6. They enter their details and pay.
That's one step more than a web app, but also a much bigger chance the user will come back to pay (you can literally send them a popup, you're a native app!).
monooso 20 hours ago [-]
If my failing memory serves, those were valid concerns in 2009, when this was written.
sevenzero 1 hours ago [-]
These are still valid concerns, given that people are becoming less and less tech-savvy with actual computers.
neilv 20 hours ago [-]
> However, the existence of pirates is a stitch in my craw, particularly when any schoolmarm typing the name of my software into Google is prompted to try stealing it instead:
I wonder whether Google, in its Don't Be Evil era, ever considered what they should do about software piracy, and what they decided.
I'd guess they would've decided to either discourage piracy, or at least not encourage it.
In the screenshot, the Google search query doesn't say anything about wanting to pirate, yet Google is suggesting piracy, a la entrapment.
(Though other history about that user may suggest a software piracy tendency, but still, Google knows what piracy seeking looks like, and they special-case all sorts of other topics.)
Is the ethics practice to wait to be sued or told by a regulator to stop doing something?
Or maybe they anticipate costs and competition for how they operate, and lobby for the regulation they want, so all they have to do is be compliant with it, and be let off the hook for lawsuits?
rustcleaner 16 hours ago [-]
"Piracy" today is not stealing IP. It's not even what it used to mean, when it was originally used to describe rogue publishers who violated copyright. IP laws as used today against private downloaders and users are the legalization of plundering of people who do the equivalent of hear a fact/idea and act on it or use it. IP cannot be stolen, an "immunity from plundering" fee is what's being paid (license). The whole justification for it with software, namely copying from disc/internet to local storage, and then copying from local storage into RAM, is a legal formality to facilitate this plundering.
It is plundering those who didn't pay you for legal immunity.
hiAndrewQuinn 8 hours ago [-]
(Disclaimer: no special knowledge of Google, all below is solely my opinion and not to be considered as factual.)
Google's revenue model is and has always been web first. The more business happening on the web, the better it is for Google writ large, especially back when competing with Microsoft was a larger priority in that space.
It's much harder to pirate a web app, for obvious reasons, than a desktop app. Desktop apps being easy to pirate shifts professional software developers on the margin towards more web apps, which means more commercial activity centered on the web, which is good for Google. So one could imagine pretty good business reasons to be at least blasé on the topic.
steve1977 20 hours ago [-]
Did Google ever have a real Don't be Evil era?
sowbug 19 hours ago [-]
The original expression came out of an internal company discussion that someone summarized (paraphrased) as "when there's a tough choice to make, one is usually less evil. Make that choice."
In the early days of Google in the public consciousness, this turned into "you can make money without being evil." (From the 2004 S-1.)
Over time, it got shortened to "don't be evil." But this phrase became an obligatory catchphrase for anyone's gripes against Google The Megacorp. Hey, Google, how come there's no dark mode on this page? Whatever happened to "don't be evil"? It didn't serve its purpose anymore, so it was dropped.
Answering your question really depends on your priors. I could see someone honestly believing Google was never in that era, or that it has always been from the start. I strongly believe that the original (and today admittedly stale) sentiment has never changed.
ux266478 19 hours ago [-]
Making a loud affair out of its retirement, rather than quietly letting it collect dust and be forgotten over time, was most definitely not a good idea.
The public already demonstrated that they adopted, misused and weaponized the maxim. Its retirement just sharpened the edge of that weapon. Now instead of "What happened to don't be evil?" it's become "Of course Google is being evil." and everything exists in that lens.
sowbug 19 hours ago [-]
A similar dynamic is playing out with Anthropic, whose founders left OpenAI in part over a philosophical split that could be described, if you'll grant a little literary license appropriate to this thread, as Anthropic choosing the "don't be evil" path. No surprise that we now see HN commentary skewering Anthropic for not living up to it.
neilv 20 hours ago [-]
They had to at least nominally have it, early on, to be able to hire the best Internet-savvy people.
Tech industry culture today is pretty much finance bro culture, plus a couple decades of domain-specific conditioning for abuse.
But at the time Google started, even the newly-arrived gold rush people didn't think like that.
And the more experienced people often had been brought up in altruistic Internet culture: they wanted to bring the goodness to everyone, and were aware of some abuse threats by extrapolating from non-Internet society.
Minor49er 20 hours ago [-]
If you need to sloganize a reminder to yourself to not be evil, that's not a promising sign
kelnos 11 hours ago [-]
You have to understand the time period. Microsoft was huge and had won the browser wars, and had become a convicted monopolist.
Google's "don't be evil" was a way for them to say "we're regular Joes, just like you; we're not Microsoft, and we're not going to do bad stuff like they do".
neilv 19 hours ago [-]
Early in Google's history, I took that sentiment as saying that they were one of us (Internet people), and weren't going to act like Microsoft (at the time, regarded by Internet people as an underhanded and ignorant company). Even though Google had a very nice IR function and general cluefulness, and seemed destined to be big and powerful.
And if it were the altruistic Internet people they hired, the slogan/mantra could be seen as a reminder to check your ego/ambition/enthusiasm, as well as a shorthand for communicating when you were doing that, and that would be respected by everyone because it had been blessed from the top as a Prime Directive.
Today, if a tech company says they aspire not to be evil: (1) they almost certainly don't mean it, in the current culture and investment environment, or they wouldn't have gotten money from VCs (who invest in people motivated like themselves); (2) most of their hires won't believe it, except perhaps new grads who probably haven't thought much about it; and (3) nobody will follow through on it (e.g., witness how almost all OpenAI employees literally signed to enable the big-money finance-bro coup of supposedly a public interest non-profit).
traderj0e 18 hours ago [-]
I took it to mean, prioritize long-term growth over short-term income. But the slogan was silly even back then, like obviously an evil company would claim to not be evil.
neilv 17 hours ago [-]
If it was silly, a lot of altruistic people nevertheless fell for it.
For example, my impression at the time was that people thought that Google would be a responsible steward of Usenet archives:
FWIW, it absolutely was believable to me at the time that another Internet person would do a company consistent with what I saw as the dominant (pre-gold-rush) Internet culture.
For example of a personality familiar to more people on HN, one might have trusted that Aaron Swartz was being genuine, if he said he wanted to do a company that wouldn't be evil.
(I had actually proposed a similar corporate rule to a prospective co-founder, at a time when Google might've still been hosted at Stanford. Though the co-founder was new to Internet, and didn't have the same thinking.)
1718627440 19 hours ago [-]
In other words, the company made a bet on people's naivety, and it worked.
fragmede 20 hours ago [-]
'99 to 2004. You had to have been there, maaaan...
steve1977 20 hours ago [-]
I've been there when Google was altavista.digital.com ;)
sudb 21 hours ago [-]
I wonder what the numbers say about desktop applications now, and how much the arrival of Electron changed things up here.
Nowadays, it seems to be that mobile apps have the "best metrics" for b2c software. I'd be interested to read a contemporary version of this article.
xp84 20 hours ago [-]
“Metrics”
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly, it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should be smart based on their job titles and salaries, got the causation completely backwards!
drBonkers 18 hours ago [-]
Hey, I notice this kind of thing all the time. People use "data" to tell the story they want to, similar to how humans seem to make a decision subconsciously and then weave a rationalization to back it up afterwards.
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
mmarian 6 hours ago [-]
Review the methodology, if you can, and form your own conclusions. Don't bother trying to change people's minds. It rarely works, and often causes conflict, even in the case of people who say they're data-driven.
gridder 17 hours ago [-]
Survivorship bias
zephen 18 hours ago [-]
This stupidity might go a long way towards explaining the relentless push towards apps.
hermitcrab 21 hours ago [-]
Some of us are still making a living from desktop apps, 17 years later.
xantronix 17 hours ago [-]
Please tell your tales. We beseech thee of thine humble wisdom.
stackghost 20 hours ago [-]
Electron is the worst of both worlds. I have never paid for an Electron app, and never will. Horrid UX.
bigyabai 20 hours ago [-]
> I have never paid for an Electron app
Your employer most likely has.
stackghost 20 hours ago [-]
Sure, and so has my government. But I can only control what I personally pay for.
hermitcrab 21 hours ago [-]
What 'best metrics'?
joenot443 19 hours ago [-]
I think in this case it can be approximated as 'largest market'
I'd wager there are more people paying for software for their smart phone than any other platform they use.
bee_rider 17 hours ago [-]
Having my credit card already is an overwhelming advantage for the Apple App store and for Steam. I won’t say it is impossible to overcome, but I think I could count on my fingers the number of instances where I, like, typed my card into a website to buy anything, in the last decade.
hermitcrab 18 hours ago [-]
Yes, but they are mostly paying little or nothing. How much did you spend on phone apps this year? And ads pay a pittance, unless you have massive scale.
sudb 20 hours ago [-]
Anecdotally, conversion - from free to trial, trial to paid, one-off purchases, etc.
mwkaufma 20 hours ago [-]
Over a decade of circular "web apps are better for the subset of problems webapps are good at" tautologies.
traderj0e 17 hours ago [-]
Web apps weren't so easy to make back then, so standalone apps were the norm. Shortly before 2009 a lot of the web apps were Java or Adobe Flash, and 2009 was part of the transition period where platforms were at war with that stuff but open-web alternatives weren't mature yet.
mwkaufma 16 hours ago [-]
Yeah, I know, I was there -- also, today's wasteful "open-web alternatives" wouldn't have flown anyway, because I recall even during the XP era having min-specs of, like, 800MHz/512MB.
traderj0e 15 hours ago [-]
Yeah, they were like "Flash is too slow" then the replacement was 5x slower
yen223 12 hours ago [-]
No one argued Flash was too slow, they argued (correctly) that Flash was closed source, proprietary, and had a lot of security issues
traderj0e 12 hours ago [-]
The most famous criticism of Flash was "Thoughts on Flash" by Steve Jobs, which said among other things that it's too inefficient. He did cite inconsistent hardware acceleration for H.264 that was a real performance drawback of Flash for video in particular, and was also complaining about the power usage for interactive Flash content in general. Jobs was right at the time from what I can tell, but somehow the end result was even slower stuff. People did keep repeating the line that Flash is slow.
I also remember people citing performance as a reason YouTube switched from Flash to HTML5. Searching those blogs now gives a lot of 404s. Like I said, the switch should've helped since it's video, but somehow YouTube immediately got slower anyway; back then I installed an extension to force it to use QuickTime Player for that reason.
The proprietary and insecure parts were real problems too. I'm fine with the decisions that were made, but this was a drawback.
onionisafruit 20 hours ago [-]
This is from 2009, and the title should say so.
trueno 13 hours ago [-]
yeah perfect, i came in here to say
I'm done making web apps (2026).
seriously, desktop apps kinda own. i just desktop-app'd a pwa, made it do SSO auth at my org, and now it's just part of the self-serve application download kiosk, and we're laughing at all the pain we've endured for so many years writing up proposals and billing to scale up web app infra for internal tooling and stuff.
im kinda enjoying coming back to earth right now with my team and we're just hmmmmmmm'ing a lot of things like this. we've had devops chasing 23498234892% availability with k8s and load balancers and all this stuff and we're now assessing how much of that cruft was completely unnecessary and made everything some amorphous blob of complexity and unpredictable billing, and really gave devops a moat to just say "no" to so many things that came through the pipeline. there's so many things that can just be dragged back to like an actual on premise machine and served up through the internal network. we are... amused at how self-important we made ourselves out to be this past decade.
we're probably like days worth of goofing away from going to buy a few mac minis and plug it into some uninterruptable power supplies and just seeing how un-serious we can get with so much tooling we've built over the years. and for everything else, desktop apps. seriously desktop apps is like free infrastructure if you build it right.
autonomousErwin 20 hours ago [-]
Grass always looks greener on the other side, mainly because it's been fertilised.
binaryturtle 20 hours ago [-]
No, "grass always looks greener on the other side" is a perspective thing. If you stand on your own grass then you look down onto it and see the dirt, but if you look over to the other side you see the gras from the side which makes it look more dense and hides the dirt. But it's the same boring grass everywhere. :)
em-bee 16 hours ago [-]
hah, i have been arguing this for years. first time to see someone else making the same argument. nice!
nothrabannosir 19 hours ago [-]
I preferred GPs poop joke version but to each their own.
CobrastanJorji 17 hours ago [-]
At first, I thought "this is missing the point of the phrase" and moved on, but now I'm back to say it's stuck in my head and an intuitive, pretty neat way to think about it.
CMay 13 hours ago [-]
In practice this was about a product that is not targeted at desktop-savvy people. At the same time, you also have people who know "this isn't a hard problem, why do I need to go through all this? let me go look for someone who did it better." Not to mention all of their younger tech savvy family telling them, "don't download anything!"
If your product targets a segment that expects a desktop app, do that. Web app, do that. Phone app, do that.
Something like this would have worked if it was still back in the Walmart bargain software shelf where people could impulse buy a CD, put it into their computer and have it automatically start and install, then show up on the desktop. Despite that being less common now, it was more streamlined in a way for many users.
Many of those people probably aren't logged into Steam or Windows Store either, so you have to do your own thing. It makes sense that web is the least friction for those people.
zeroc8 3 hours ago [-]
If he had used Delphi, the app would have been a single self contained exe.
No JVM updates to worry about.
HeyLaughingBoy 18 hours ago [-]
It's hard to believe not only that this is 17 years old, but that I remember when he posted it!
projektfu 18 hours ago [-]
Lol, it's less than 10 years old. The 80s were 20 years ago.
qq66 20 hours ago [-]
Nothing in this article is wrong, but worth noting that pre-AI, the companies that most significantly transformed the way we use our computers (Slack, Spotify, VS Code, etc.) did ship desktop apps.
xp84 20 hours ago [-]
“Desktop Apps”? I’d say pre-Electron, the ones that existed that far back shipped desktop apps, but for the past 10-15 years it’s all been Electron slop, which hardly qualify as “desktop apps” in my book.
If anything, it’s my very faint hope that AI would give companies - especially non-software companies - the bandwidth to release two real native apps instead of just 2 builds of a shitty Electron app. Fat chance though, I think, not least because companies love to use their “bRaNdInG” on everything - so the native look and feel a real app gives you “for free” is a downside for the clowns that do the visual design for most companies.
data-ottawa 20 hours ago [-]
For what it’s worth, I tried making a GTK4 app. I got started, created a window, created a header bar, then went to add a url/path entry widget and everything fell apart.
Entry suggestions/completions are formally deprecated with no replacement since 2022. When I did get them working on the deprecated API there was an empty completion option that would segfault if clicked. The default behaviour didn’t hide completions on window unfocus, so my completions would hover over any other open window. There was seemingly no way to disambiguate tab vs enter events… it just sucked.
So after adding one widget I abandoned the project. It felt like the early releases of SwiftUI where you could add a list view but then would run into weird issues as soon as you tried adding stuff to it.
Similarly, trying to build an app for macOS practically depends on Swift by Sundell, Hacking with Swift, or others to make up for Apple’s lack of documentation in many areas. For years, stuff like NSColor vs Color and similar API boundaries added friction, and the native macOS SwiftUI components just never felt normal while I tried making apps.
As heavy as web libraries and Electron are, at least they mostly work out of the box.
IshKebab 18 hours ago [-]
There is definitely a shortage of good GUI toolkits; making one is a huge undertaking. GTK is mediocre, as you discovered.
QtWidgets is extremely good though, even if it is effectively in maintenance mode.
Avalonia also seems good too though I haven't used it myself.
hn_acc1 16 hours ago [-]
I've used Qt off and on, and it's generally worked as advertised. Although when drawing very short lines on a canvas way back when (~2004), it wouldn't do a great job and I had to hack in custom routines that did a much better job.
For prototyping / one-offs, I've always enjoyed working in Tcl/Itcl and Tk/Itk - object oriented Tcl with a decent set of widgets. It's not going to set the world on fire, but it's pretty portable (should mostly work on every platform with minor changes), has a way to package up standalone executables, can ship many-files-as-one with an internal filesystem, etc..
Of course, I spent ~15 years in EDA, so it's much more comfortable than for most people, but it can easily be integrated into C/C++ as well with SWIG, etc.
robinsonb5 16 hours ago [-]
> I've always enjoyed working in Tcl/Itcl and Tk/Itk
In the near future I need to lash up a windows utility to generate a bunch of PDF files from a CSV (in concert with GhostScript), with specific filenames. I was trying to figure out the best approach and hadn't even considered Tcl and Tk - with Itcl you might have just given me a new rabbithole to explore! Thanks! (...I think!)
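(For what it's worth, that lash-up can be sketched in a few lines of Python rather than Tcl; this is only an illustrative sketch, where the `invoice_id`, `customer`, and `source` CSV columns are hypothetical stand-ins for whatever the real spreadsheet holds, and it assumes a Ghostscript binary such as `gswin64c` is on the PATH.)

```python
import csv
import subprocess
from pathlib import Path

def pdf_name(row):
    """Build a filesystem-safe PDF filename from one CSV row.
    Column names 'invoice_id' and 'customer' are hypothetical."""
    safe = "".join(c if c.isalnum() or c in "-_" else "_" for c in row["customer"])
    return f"{row['invoice_id']}_{safe}.pdf"

def convert_all(csv_path, out_dir, gs="gswin64c"):
    """For each row, have Ghostscript rewrite the 'source' document
    as a PDF under the generated filename."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            target = out / pdf_name(row)
            # -sDEVICE=pdfwrite produces PDF output; -dBATCH/-dNOPAUSE
            # keep Ghostscript from dropping into its interactive prompt.
            subprocess.run(
                [gs, "-sDEVICE=pdfwrite", "-dBATCH", "-dNOPAUSE",
                 f"-sOutputFile={target}", row["source"]],
                check=True,
            )
```

The filename logic is the part worth getting right up front; the Ghostscript invocation is just a subprocess call you can swap flags into later.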
hn_acc1 15 hours ago [-]
I hope it works out! It's amazing how far Tcl/Tk has come since I "had" to use it as a wrapper around an X11 window back on an SGI Irix, using Tcl scripting to interface to an OpenGL backend. I think that was like 7.3.x or something in 1994. And it was pretty cool back then already! The team around Tcl is small, but dedicated and brilliant, IMHO.
hermitcrab 15 hours ago [-]
>Although when drawing very short lines on a canvas way back when (~2004), it wouldn't do a great job and I had to hack in custom routines that did a much better job.
QCanvas (or was it QGraphicsCanvas?) has long since been replaced by QGraphicsScene, which is much more capable and doesn't suffer from pixelation issues.
hn_acc1 15 hours ago [-]
Probably. We paid thousands / year for the developer seat in our startup, and in the end, it wasn't great. I did manage to make the Tcl/Tk event loop and the Qt event loop work together, so we could have Tk windows inside a Qt app!
hermitcrab 15 hours ago [-]
There is a small business licence.
hermitcrab 18 hours ago [-]
Qt is still under very active development. Although there seems to be a lot more emphasis on QML than the widgets side of things for some time.
IshKebab 20 minutes ago [-]
Yes QtQuick/QML is under active development. QtWidgets is not really.
nozzlegear 17 hours ago [-]
> If anything, it’s my very faint hope that AI would give companies - especially non-software companies - the bandwidth to release two real native apps instead of just 2 builds of a shitty Electron app.
Anthropic has the resources of a fully armed and operational Claude Mythos (eyeroll), but they still choose to shit out an electron app on all of their users instead of going native like their competitors have done.
duped 20 hours ago [-]
All of those examples are web apps, two of them started on the web itself, and none of them transformed anything about how we used our computers (slack replaced a number of competitors, spotify is iTunes for the web, and VS code is a smaller jetbrains)
ksherlock 21 hours ago [-]
[2009]
Animats 17 hours ago [-]
From the article: "for the last three years I’ve sold Bingo Card Creator.
That's a job for a web page. It doesn't need to be installed.
rustcleaner 16 hours ago [-]
Never depend on anything you can't later privateer when the publisher decides to retroactively change the deal (or worse: when they become a subscription "service"). Minimize regular payment sinks. Owning your own infrastructure will always be cheaper in the long run; when you rent, you're paying the cost of ownership plus maintenance plus profit. Sure there may be an economy of scale, but what are you trading for that scale (loss of privacy, loss of sovereignty, loss of ownership, loss of control)?
Just. Don't. Subscribe.
Simple!
yshamrei 21 hours ago [-]
I would like to go back to 2009 =) The world was definitely simpler, and Bitcoin was cheaper =)
QuantumNomad_ 20 hours ago [-]
Please pick up a few bitcoins for me too when you go there
xp84 20 hours ago [-]
Realizing I could frickin mine enough bitcoins overnight back then to probably be set for life (maybe for multiple generations) now, is one of my biggest life regrets. I assume it’s shared with all other people who were into tech back then but dismissed bitcoin as stupid, as I did.
werdnapk 20 hours ago [-]
You simply can't get hung up on what could have been. Same applies to trying to time the stock market... should have bought, should have sold. Best thing is to know there's nothing that can be done about the past and move along and deal with what you can do now instead.
xp84 19 hours ago [-]
You're right. What gets me though is that unlike the stock market, bitcoin was an incredibly rare occurrence where anyone could have gotten extraordinarily rich without even incurring any risk! (besides a couple evenings spent learning how to use it.) Whereas to have $10MM today in GOOG stock, I would have had to invest over $300k in 2010.
ux266478 18 hours ago [-]
> without even incurring any risk!
That's not true at all, any number of things could have killed bitcoin in its infancy. The stakes were just low. Somewhere out there is a lost collection of wallets of mine, collectively holding ~100btc ($1000 at the time). If regulators cracked down hard, that 100btc would have become just as worthless and either way I'd be out $1000.
"Risk" is an epistemic claim about the future taking the worse path. Obviously looking back it looks like risk-free money. That's not how it looked at the time. The "currency of the future" thing was always niche, especially after the crash in 2013, until a much larger cultural shift happened around 2015-ish.
Plenty of people will chime in with early bitcoin stories, and how they always believed it was going to go to the moon, etc. I always find it curious because my memory of the time period is that it was a means to an end, and that's how the overwhelming majority saw it and treated it. Funnily enough, it was thanks to that overwhelming majority that led to it being worth anything at all. If it was just a bunch of yahoos clamoring about the "currency of the future" thing, it probably would have gone absolutely fucking nowhere. The irony that the yahoos ended up becoming the majority I think is underappreciated.
CobrastanJorji 17 hours ago [-]
Every year since around 2014, friends and family would ask whether they should buy Bitcoin, and every year I told them that I had looked into Bitcoin, I fully understood what Bitcoin was and how it worked, and I recommended that they not invest in Bitcoin because it was stupid. And every year, my advice has been disastrously wrong. Who knows, maybe 2026 will be the first time I'm right.
AngryData 20 hours ago [-]
I put my compute in those days to help do some kind of protein folding simulation, definitely should have been bitcoin.
databasa 20 hours ago [-]
So true. No real SaaS, no heavy cloud infrastructure.
rossant 20 hours ago [-]
I was curious why AI wasn't mentioned. Then I noticed the date: 2009.
wslh 20 hours ago [-]
And I also think many of the mobile and web apps will end up in prompting in the next few years.
Decabytes 16 hours ago [-]
Flutter has brought the joy of desktop gui programming back for me. Especially in this vibe coding era, building all those apps that only really have value to me has never been easier. And seeing the support for Flutter from Canonical has been nice. They’ve been helping make flutter on Linux better
arikrahman 13 hours ago [-]
I came to the same conclusion but due to the Web tooling being much better with Clojure. The Clojure toolchains to develop desktop apps like ClojureDart are too brittle from my experience.
righthand 20 hours ago [-]
I’m actually hopping on the desktop applications train. Though not for money. I just think the browser is becoming a surveillance plague of computing and we need MORE high quality desktop software not built on the invasive web stack to counter it.
taude 18 hours ago [-]
We should do more of this, hacker news, surface 17 year old articles and then debate like they were written yesterday!
recrush 21 hours ago [-]
which circle are we in?
josefritzishere 21 hours ago [-]
Condemned to useless labor, I believe that's the 4th circle.
taude 18 hours ago [-]
feels more like 8, living in times of mass-fraud.
mattfrommars 18 hours ago [-]
I have a love and hate relationship with native Windows desktop applications. As a kid, I used to look for GUI applications and .exe files, since they were a breeze to run and just felt right.
Now, in my day job, I dislike developing Windows desktop applications, partly because the application is massive and super slow to develop, and partly because there just isn't a lot of investment from the company into the product.
swyx 20 hours ago [-]
> Web Applications Convert Better
ok, now do this analysis for mobile apps...
ang_cire 21 hours ago [-]
> Why I'm Done Making Desktop Applications
To save you a click: It's harder to monetize desktop apps than webapps.
Lol. LMAO, even.
fph 20 hours ago [-]
Didn't HN have a "no clickbait titles" rule?
traderj0e 18 hours ago [-]
It's not clickbait though
whateveracct 20 hours ago [-]
it's amazing how freeing working an office job is. my personal projects don't have concerns such as monetization.
tonyedgecombe 20 hours ago [-]
On the other hand I spent 25 years selling desktop software and never once had an annual review. I never had to submit an application for time off. I never had to ask permission for a dentist appointment. If the weather was good I could take the day off and go for a bike ride. I didn’t attend any scrum meetings nor did I have to argue about what framework to use with a PM who couldn’t code FizzBuzz.
whateveracct 17 hours ago [-]
yeah but i get paid to use the toilet in my own home
ig remote work is the best of both worlds
dusted 17 hours ago [-]
Fuck. Web. Apps.
jrm4 20 hours ago [-]
Great, good riddance. Hopefully open source and/or AI push this person out of developing entirely.
People who focus this much on "conversion" et al are dinosaurs who deserve extinction.
monooso 19 hours ago [-]
First up, this article is 17 years old. There's no reason to assume the author has exactly the same opinions today.
More importantly, the author is talking about the realities of trying to earn a decent living shipping independent software. That requires paying customers.
It's perfectly reasonable to want to be paid for your work, and it certainly doesn't warrant the vitriol in your comment.
hermitcrab 18 hours ago [-]
Is a commercial software vendor not supposed to care how many sales they make?
traderj0e 18 hours ago [-]
Please tell me at least you don't work at some software corp where it's someone else's job to worry about the business, if you're going to pass that kind of judgement.
Aurornis 18 hours ago [-]
The author (who is a frequent commenter here) started a company called Appointment Reminder after writing this, which for years was my favorite example of an independent small company that identified a niche, served it well, and then went on to be acquired.
The world has changed a lot since then. The days where 37 Signals could build an empire out of simple web form apps and individuals could build and sell a SaaS that sends reminder texts are long gone. Most of the low hanging fruit was mined out long ago and most simple services have seen 100 different startups try to serve them already.
As much as Appointment Reminder was my prime example of a successful indie SaaS, the author's second startup has (with all due respect) become one of my prime examples of not validating product-market fit before building your product. They went on to build Starfighter, a company that was supposed to be a candidate vetting platform where people could do complex coding challenges and then get matched up with companies wanting to hire people. It was built partially in the open through their newsletter and in Hacker News posts.
If you thought doing LeetCode problems to get interviews was annoying, imagine having to spend hours or days going through a CTF where you hack multi-core CPUs to do something complex with a simulated stock market. I can't even remember the entire premise, but every time I read something about the company it was getting more and more complex. At the same time I was on other forums where candidates were going the opposite direction: becoming frustrated with the proliferation of coding interviews and refusing to do interview challenges that would take hours of their time.
I remember through the entire process thinking that it seemed like a questionable business plan that wouldn't really appeal to companies or to candidates. Even the Hacker News comments were full of (surprisingly polite) feedback saying that investing a lot of hours into solving programming puzzles to maybe get some recruiter interest wasn't appealing - https://news.ycombinator.com/item?id=10480390
Some amazing foreshadowing in that thread from one of the co-founders (not Patrick McKenzie):
> I literally lack the ability to form coherent sentences about our business that don't somehow involve how to render a graph of AVR basic blocks in a React web app, is how little we're thinking about how the game interacts with recruiting right now.
> We are going to get the CTF right, and then work from there to a sustainable recruiting business. We should have done it the other way around, but we didn't. :)
As you might have guessed, it didn't work out at all. It was weird for me to follow one of my indie startup heroes on their journey into their second business that skipped all of the normal startup advice and then reached the exact conclusion that advice was warning against.
It was enlightening to follow along and I'm glad they tried something different and shared it along the way, but watching it happen was a turning point for me in how I approach advice from any one individual author, blogger, writer, or influencer.
hiAndrewQuinn 7 hours ago [-]
I think you'd appreciate some of the more philosophical thoughts behind folks like Robin Hanson or gwern, then. Or even farther down that road, books like The Enigma of Reason.
The idea that previous business success only weakly predicts future business success, and that that correlation probably becomes even weaker as one tries things increasingly far from the perimeter, is one I believe in but can't really trace back to any concrete source, which suggests my worldview just dynamically generates it off the dome in response to this story. I probably have a decade-plus of imbibing their arguments to thank for that.
I'm still a big fan of patio11 though. Starfighter is maybe best seen these days as watching a man be professionally slightly embarrassed, then dusting himself off and going on to do a bunch of cool stuff afterwards anyway, weak correlations be damned.
hermitcrab 18 hours ago [-]
The majority of software products don't work out.
Aurornis 17 hours ago [-]
Correct. That's why it's a mistake to build big, complex products before testing the business model.
shevy-java 18 hours ago [-]
Well ... 17 years ago.
"Over roughly the same period my day job has changed and transitioned me from writing thick clients in Swing to big freaking enterprise web apps."
I mean, the web kind of won. We just don't have a simple and useful way to design for the web AND the desktop at the same time. I also use the www of course, with a gazillion of useful CSS and JavaScript where I have to. I have not entirely given up on the desktop world, but I abandoned ruby-gtk and switched to ... jruby-swing. I know, I know, nobody uses swing anymore. The point is not so much about using swing per se, but simply to have a GUI that is functional on windows, with the same code base (I ultimately use the same code base for everything on the backend anyway). I guess I would fully transition into the world wide web too, but how can you access files on the filesystem, create directories etc... without using node? JavaScript is deliberately restricted, node is pretty awful, ruby-wasm has no real documentation.
For something like Audacity (the audio program), how the heck does it make sense to put that on a website (I'm just giving a random example, I don't think they've actually done this), where you first have to upload your source file (privacy issues), manipulate it in a graphically/widget-limited browser - do they have a powerful enough machine on the backend for your big project? - then download the result? It's WAY, WAY better to be able to run the code on your own machine, etc. AND to be stable, so that once you start a project, it won't break halfway through because they changed/removed that one feature you relied upon (no, not thinking of AI at all, why do you ask? :-)
Yeah, but as a maintainer it's the opposite, isn't it? I don't have to worry about supporting version current - 3 in the Polish version of Windows because you're always running the version I've deployed in the environment I've deployed it in (I mean, yes, I'm oversimplifying given the frontend component, but that's still a much smaller surface).
I understand it was just an example, but you'd be surprised how far browsers have come along with technologies like Web Assembly and WebGL. Forget audio editing, you can even do video editing - without uploading any files to the remote server[1]. All the processing is done locally, within your browser.
And if you thought that was impressive, wait till you find out that you can even boot the whole Linux kernel in your browser using a VM written in WASM[2]!
But I do agree with your points about lack of feature stability. I too prefer native apps just for the record (but for me, the main selling points are low RAM/CPU/disk requirements and keyboard friendliness).
[1] https://news.ycombinator.com/item?id=47847558
[2] https://joelseverin.github.io/linux-wasm/
https://pikimov.com/
And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
Putting aside the video editing example for a bit, consider the photo editing web app Photopea, which is an excellent alternative to Adobe Photoshop. Linux is in urgent need of a Photoshop-like editor (and no, GIMP doesn't cut it), but Photopea does a decent enough job for many amateurs and even some pros. For a lot of these folks, Photoshop is one of the last things stopping them from switching to Linux, so apps like Photopea fill that gap. And guess what, Photopea works great on Android too.
Another use case is restricted environments where you can't easily find and install apps, eg immutable distros, or work computers. I use Photopea on my work PC quite regularly for light editing, because MS Paint sucks, and my role doesn't really justify going thru the hassle of getting the approvals to get an editor installed. So like it or not, web apps have their place.
How is Photopea better than GIMP? How is it better than Krita?
It's the issue of friction. Also, good webapps are often _better_ than native apps, as they can support tabs.
> And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
Because it relied on Java and SWING, which were a disaster for desktop apps.
All the native apps I use support tabs; it's a basic feature of the macOS windowing APIs https://developer.apple.com/documentation/appkit/nswindowtab...
I grew up reading his writings and learned pretty quickly to read them as "this is what I'm thinking right now in my life" even though they're written more as authoritative and decisive writings from an expert. Over time he's gone from SEO expert to $30K/week consulting expert to desktop app expert to indie SaaS expert to recruiting industry expert to working for Stripe Atlas. It was fun to read his writings at each point, but after so many changes I realized it was better to read it as a blog of ongoing learnings and opinions, not necessarily as retrospective wisdom shared from years of experience on the topic even if that's what the writing style conveys.
So I agree that the advice in the post should be taken entirely in context of pursuing the specific goals he was pursuing at the time. The less your goals happen to align, the less relevant the advice becomes.
Today, even the minimal steps of creating a desktop app have lost their appeal, but I like showing how I solved a problem, so my "apps" are Jupyter notebooks.
Most things I create in my free time are for my and my family's consumption and typically benefit immensely from the write once run everywhere nature of the web.
You can launch a small toy app on your intranet and run it from everywhere instantly. And typically these things are also much easier to interconnect.
Desktop publishing.
Brokerage apps (some are webapps but many ship an actual desktop app).
And yet, to me, something changed: I still "install apps locally", but "locally" as in "only on my LAN", but they can be webapps too. I run them in containers (and the containers are in VMs).
I don't care much as to whether something is a desktop app, a GUI or a TUI, a webapp or not...
But what I do care about is being in control.
Say I'm using "I'm Mich" (immich) to view family pictures: it's shipped (it's open source), I run it locally. It'll never be "less good" than it is today: for if it is, I can simply keep running the version I have now.
It's not open to the outside world: it's to use on our LAN only.
So it's a "local" app, even if the interface is through a webapp.
In a way this entire "desktop app vs webapp" is a false dichotomy, especially when you can have a "webapp (really in a browser) that you can self-host on a LAN" and then a "desktop app that's really a webapp (say wrapped in Electron) that only works if there's an Internet connection".
KDE has analytics, they're just disabled by default (and I always turn them on in the hopes of convincing KDE to switch the defaults to the ones I like).
No, it's a concern if you care about impact. Improving commercial profits is one kind of impact that is relevant to for-profit corporations, but there is also impact like "improving user privacy" or "helping lower-income people manage their finances with a free-as-in-beer product". This impact can be measured and the feedback can be used to improve the product according to non-profit, non-commercial goals.
There are also people who build open-source software as a hobby and couldn't give two shits whether other people use it or not. More power to them. For those people, you are correct. https://book.iced.rs/philosophy.html comes to mind.
Then there are projects like Streisand (maybe a bad example, I see it has since been archived, but it came to mind) that want to change the world in some way. Those projects very much do need to care about metrics like, how many people are downloading the software, are people opening GitHub issues, are we obscure or is our target audience talking about us, hopefully positively but if not, how can we improve that? Value must always be worth the cost (even when the code is free, it must be worth the time to download, give it a try, give it CPU/RAM, maintain/upgrade the installation) - are we giving users value or are they churning?
It might blow your mind but even non-profits hire people with MBAs (and universities offer programs for MBAs that focus on non-profit management), precisely because some organizations focus on non-financial impact.
If development ends at a git push and users are left to build/fend for themselves (granted this is a lot of open source), then yeah, not much difference, but if you're building and packaging it up for users (which you're more likely to be doing if your project is an app specifically) then the difference is massive.
Times have changed quite a bit from nearly 20 years ago.
For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose web 95% of the time.
Ignoring the fragmentation of course; although that seems to be getting less and less each year (so long as you ignore Safari).
Counter-counterpoint: Maybe it's time to require professional engineer certification before a software product can be shipped in a way that can be monetized. It's to filter devs from the industry who look at browsers today and go "Yeah, this is a good universal app engine."
The impact on people's time, money, and the environment is proportional.
Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?
And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave off of that.
A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.
Maybe useful higher-level elements like layout, typography, etc. could be shared as frameworks.
There are many alternate histories where a different base application layer (app engine) could have been designed for the web (the platform)
Remember Livescript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross platform system. The web and Javascript just BARELY escaped a big tech smack down. They stroked the ego of big tech by renaming it to Javascript to honor Java. Licked some boots, promised a very mediocre, non-threatening UI experience in the browser and big tech allowed it to exist. Then the whole world started using the web/javascript. It caught fire before big tech could extinguish. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens but that's another story.
You may not like browsers but they are the ONLY thing big tech can't extinguish due to ubiquity. Achieving ubiquity is not easy, not even possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross platform GUI.
Web browser UI available on all devices is not a failure, it's a miracle.
To top it all off, HTML/CSS/Javascript is a pretty good system. The box model of CSS is great for a cross platform design. Things need to work on a massive TV or small screen phone. The open text-based nature is great for catering to screen readers to help the visually impaired.
The latest Wizbang GPU powered UI framework probably forgot about the blind. The latest Wizbang is probably stuck in the days of absolute positioning and non-declarative layouts. And with x,y(z) coords. It may be great for the next-gen 4-D video game, but sucks for general purpose use.
It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.
You've reminded me of the XKCD comic about standards: https://xkcd.com/927/
Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.
I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).
As a user, a properly implemented desktop interface will always beat the web. By properly, I mean obeying shortcut keys and conventions of the desktop world. Having alt+letter assignments to boxes and functions, Tab moves between elements, pressing PageUp/PageDown while in a text entry area for a chat window scrolls the chat history above and not the text entry area (looking at you SimpleX), etc.
Sorry, not sorry. Web interface is interface-smell, and I avoid it as much as possible. Give me a TUI before a webpage.
Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.
And his point about randomly moving buttons to see if people like it better?
No fucking thanks. The last thing I need is an app made of quicksand.
The user interface is your contract with your users: don't break muscle memory! I would ditch FF-derivatives, but I'm held hostage by them because the good privacy browsers are based on FF.
Stop following fads! Be like craigslist: never change, or if you do then think long and hard about not moving things around! Also if you're a web/mobile developer, learn desktopisms! Things don't need to be spaced out like everything is a touch interface. Be dense like IRC and Briar, don't be sparse like default Discord or SimpleX! Also treat your interfaces like a language for interaction, or a sandbox with tools; don't make interfaces that only corral and guide idiots, because a non-idiot may want to use it someday.
I really wish Stallman could be technology czar, with the power to [massively] tax noncompliance to his computing philosophy.
These concerns may not matter to you, the developer, but they absolutely matter to end-users.
If your prospective user can't find the setup.exe they just downloaded, they won't be able to use your software. If your conversion and onboarding sucks, they'll get confused and try the commercial offering instead. If you don't gather analytics and A/B test, you won't even know this is happening. If you're not the first result on Google, they'll try the commercial app first.
Users want apps that work consistently on all their devices and look the same on both desktop and mobile, keep their data when they spill coffee on the laptop, and let them share content on Slack with people who don't have the app installed. Open source doesn't have good answers to these problems, so let's not shoot ourselves in the foot even further.
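To make the analytics point concrete: the basic check an A/B test gives you is a two-proportion comparison, which takes only a few lines of stdlib Python. This is a generic sketch, not anything from the article; all the counts below are invented.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of "no real difference".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: 120/4000 conversions on variant A vs 165/4000 on variant B.
z = two_proportion_z(120, 4000, 165, 4000)
# |z| > 1.96 means the difference is unlikely to be chance at the 5% level.
```

Without instrumentation you never get the four counts that feed this function, which is the point the comment above is making.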
If a piece of software doesn’t have users and the developers don’t care about the papercuts they are delivering, I would argue what they have created is more of an art project than a utility.
Art works without popular appeal can become highly treasured by some.
Open source software doesn't have to be ambitious to be worthwhile and useful. It can be artful, utilitarian, or an artifact of play. Commercial standards shouldn't be the only measure of good software.
Good! It's not for them! They can stay paypigs on subscription because they can't git gud!
1. Google, find, read... this is the same for web apps.
2. Click download and wait a few seconds. Not enough time to give up because native apps are small. Heavy JS web apps might load for longer than that.
3. Click on the executable that the browser pops up in front of you. No closing the browser or looking for your downloads folder. It's right there!
3.5. You probably don't need an installer and it definitely doesn't need a multi-step wizard. Maybe a big "install" button with a smaller "advanced options".
3.6. Your installer (if you even have it) autostarts the program after finishing.
4. The user uses it and is happy.
5. Some time later, the program prompts the user to pay, potentially taking them directly onto the payment form either in-app or by opening it in a browser.
6. They enter their details and pay.
That's one step more than a web app, but also a much bigger chance the user will come back to pay (you can literally send them a popup, you're a native app!).
I wonder whether Google, in its Don't Be Evil era, ever considered what they should do about software piracy, and what they decided.
I'd guess they would've decided to either discourage piracy, or at least not encourage it.
In the screenshot, the Google search query doesn't say anything about wanting to pirate, yet Google is suggesting piracy, a la entrapment.
(Though other history about that user may suggest a software piracy tendency, but still, Google knows what piracy seeking looks like, and they special-case all sorts of other topics.)
Is the ethical practice to wait until you're sued or told by a regulator to stop doing something?
Or maybe they anticipate costs and competition for how they operate, and lobby for the regulation they want, so all they have to do is be compliant with it, and be let off the hook for lawsuits?
It is plundering those who didn't pay you for legal immunity.
Google's revenue model is and has always been web first. The more business happening on the web, the better it is for Google writ large, especially back when competing with Microsoft was a larger priority in that space.
It's much harder to pirate a web app, for obvious reasons, than a desktop app. Desktop apps being easy to pirate shifts professional software developers on the margin towards more web apps, which means more commercial activity centered on the web, which is good for Google. So one could imagine pretty good business reasons to be at least blasé on the topic.
In the early days of Google in the public consciousness, this turned into "you can make money without being evil." (From the 2004 S-1.)
Over time, it got shortened to "don't be evil." But this phrase became an obligatory catchphrase for anyone's gripes against Google The Megacorp. Hey, Google, how come there's no dark mode on this page? Whatever happened to "don't be evil"? It didn't serve its purpose anymore, so it was dropped.
Answering your question really depends on your priors. I could see someone honestly believing Google was never in that era, or that it has always been in it from the start. I strongly believe that the original (and today admittedly stale) sentiment has never changed.
The public already demonstrated that they adopted, misused and weaponized the maxim. Its retirement just sharpened the edge of that weapon. Now instead of "What happened to don't be evil?" it's become "Of course Google is being evil." and everything exists in that lens.
Tech industry culture today is pretty much finance bro culture, plus a couple decades of domain-specific conditioning for abuse.
But at the time Google started, even the newly-arrived gold rush people didn't think like that.
And the more experienced people often had been brought up in altruistic Internet culture: they wanted to bring the goodness to everyone, and were aware of some abuse threats by extrapolating from non-Internet society.
Google's "don't be evil" was a way for them to say "we're regular Joes, just like you; we're not Microsoft, and we're not going to do bad stuff like they do".
And if it were the altruistic Internet people they hired, the slogan/mantra could be seen as a reminder to check your ego/ambition/enthusiasm, as well as a shorthand for communicating when you were doing that, and that would be respected by everyone because it had been blessed from the top as a Prime Directive.
Today, if a tech company says they aspire not to be evil: (1) they almost certainly don't mean it, in the current culture and investment environment, or they wouldn't have gotten money from VCs (who invest in people motivated like themselves); (2) most of their hires won't believe it, except perhaps new grads who probably haven't thought much about it; and (3) nobody will follow through on it (e.g., witness how almost all OpenAI employees literally signed to enable the big-money finance-bro coup of supposedly a public interest non-profit).
For example, my impression at the time was that people thought that Google would be a responsible steward of Usenet archives:
https://en.wikipedia.org/wiki/Henry_Spencer#Preserving_Usene...
FWIW, it absolutely was believable to me at the time that another Internet person would do a company consistent with what I saw as the dominant (pre-gold-rush) Internet culture.
For example of a personality familiar to more people on HN, one might have trusted that Aaron Swartz was being genuine, if he said he wanted to do a company that wouldn't be evil.
(I had actually proposed a similar corporate rule to a prospective co-founder, at a time when Google might've still been hosted at Stanford. Though the co-founder was new to Internet, and didn't have the same thinking.)
Nowadays, it seems to be that mobile apps have the "best metrics" for b2c software. I'd be interested to read a contemporary version of this article.
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly, it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers that bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should be smart based on their job titles and salaries, got the causation completely backwards!
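The self-selection trap described above can be made concrete with a toy simulation. Every number here is invented; the point is only that a conversion gap appears even though the app does nothing.

```python
import random

random.seed(0)  # deterministic toy run

# Toy model (all numbers invented): 5% of customers are "superfans" who
# convert at 30%; everyone else converts at 2%. Only superfans bother to
# install the brand's app, and the app itself changes nobody's behavior.
def simulate(n=100_000):
    app_conv = app_n = web_conv = web_n = 0
    for _ in range(n):
        superfan = random.random() < 0.05
        converts = random.random() < (0.30 if superfan else 0.02)
        if superfan:  # self-selection: only superfans download the app
            app_n += 1
            app_conv += converts
        else:
            web_n += 1
            web_conv += converts
    return app_conv / app_n, web_conv / web_n

app_rate, web_rate = simulate()
# app_rate comes out many times higher than web_rate even though the app
# has no causal effect at all -- the superfans selected themselves.
```

Reading "the app converts better" off those two rates is exactly the backwards causation the comment describes.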
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
https://www.successfulsoftware.net
Your employer most likely has.
I'd wager there are more people paying for software for their smart phone than any other platform they use.
I also remember people citing performance as a reason YouTube switched from Flash to HTML5. Searching those blogs now is giving a lot of 404s. Like I said this should've helped since it's video, but somehow YouTube immediately got slower anyway back then. Back then I installed an extension to force it to use QuickTime Player for that reason.
The proprietary and insecure parts were real problems too. I'm fine with the decisions that were made, but this was a drawback.
I'm done making web apps (2026).
seriously desktop apps kinda own. i just desktop-app'd a pwa, made it do SSO auth at my org, and now it's just part of the self-serve application download kiosk, and we're laughing at all the pain we've endured for so many years writing up proposals and billing to scale up web app infra for internal tooling and stuff.
im kinda enjoying coming back to earth right now with my team and we're just hmmmmmmm'ing a lot of things like this. we've had devops chasing 23498234892% availability with k8s and load balancers and all this stuff and we're now assessing how much of that cruft was completely unnecessary and made everything some amorphous blob of complexity and unpredictable billing & and really gave devops a moat to just say "no" to so many things that came through the pipeline. there's so many things that can just be dragged back to like an actual on premise machine and served up through the internal network. we are... amused at how self-important we made ourselves out to be this past decade.
we're probably like days worth of goofing away from going to buy a few mac minis and plug them into some uninterruptible power supplies and just seeing how un-serious we can get with so much tooling we've built over the years. and for everything else, desktop apps. seriously, desktop apps are like free infrastructure if you build them right.
If your product targets a segment that expects a desktop app, do that. Web app, do that. Phone app, do that.
Something like this would have worked if it was still back in the Walmart bargain software shelf where people could impulse buy a CD, put it into their computer and have it automatically start and install, then show up on the desktop. Despite that being less common now, it was more streamlined in a way for many users.
Many of those people probably aren't logged into Steam or Windows Store either, so you have to do your own thing. It makes sense that web is the least friction for those people.
If anything, it’s my very faint hope that AI would give companies - especially non-software companies - the bandwidth to release two real native apps instead of just 2 builds of a shitty Electron app. Fat chance though, I think, not least because companies love to use their “bRaNdInG” on everything - so the native look and feel a real app gives you “for free” is a downside for the clowns that do the visual design for most companies.
Entry suggestions/completions have been formally deprecated with no replacement since 2022. When I did get them working on the deprecated API, there was an empty completion option that would segfault when clicked. The default behaviour didn’t hide completions on window unfocus, so my completions would hover over any other open window. There was seemingly no way to disambiguate tab vs enter events… it just sucked.
So after adding one widget I abandoned the project. It felt like the early releases of SwiftUI where you could add a list view but then would run into weird issues as soon as you tried adding stuff to it.
Similarly, trying to build an app for macOS practically depends on Swift by Sundell, Hacking with Swift, or others to make up for Apple’s lack of documentation in many areas. For years stuff like NSColor vs Color and similar API boundaries added friction, and the native macOS SwiftUI components just never felt normal while I tried making apps.
As heavy as web libraries and Electron are, at least they mostly work out of the box.
QtWidgets is extremely good though, even if it is effectively in maintenance mode.
Avalonia seems good too, though I haven't used it myself.
For prototyping / one-offs, I've always enjoyed working in Tcl/Itcl and Tk/Itk - object-oriented Tcl with a decent set of widgets. It's not going to set the world on fire, but it's pretty portable (it should mostly work on every platform with minor changes), has a way to package up standalone executables, can ship many-files-as-one with an internal filesystem, etc.
Of course, I spent ~15 years in EDA, so it's much more comfortable than for most people, but it can easily be integrated into C/C++ as well with SWIG, etc.
In the near future I need to lash up a windows utility to generate a bunch of PDF files from a CSV (in concert with GhostScript), with specific filenames. I was trying to figure out the best approach and hadn't even considered Tcl and Tk - with Itcl you might have just given me a new rabbithole to explore! Thanks! (...I think!)
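For what it's worth, that kind of CSV-driven GhostScript batch is also a few lines in plain Python. A rough sketch, with some assumptions: the column names `input` and `output` are hypothetical (whatever the real CSV uses goes there), and `-sDEVICE=pdfwrite` is GhostScript's standard PDF-producing device:

```python
import csv
import io
import subprocess  # only needed for the real invocation at the bottom

def gs_commands(csv_text, gs="gs"):
    """Build one GhostScript command line per CSV row.

    Assumed CSV layout: an 'input' column (source file for gs) and an
    'output' column (the specific PDF filename wanted for that row).
    """
    cmds = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cmds.append([
            gs, "-dBATCH", "-dNOPAUSE", "-dQUIET",  # non-interactive, quiet
            "-sDEVICE=pdfwrite",                    # write PDF output
            f"-sOutputFile={row['output']}",        # filename taken from the CSV
            row["input"],
        ])
    return cmds

if __name__ == "__main__":
    sample = "input,output\nreport.ps,2024-report.pdf\n"
    for cmd in gs_commands(sample):
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually run gs
```

On Windows the console executable is typically `gswin64c` rather than `gs`, hence the parameter.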
QCanvas (or was it QGraphicsCanvas?) has long since been replaced by QGraphicsScene, which is much more capable and doesn't suffer from pixelation issues.
Anthropic has the resources of a fully armed and operational Claude Mythos (eyeroll), but they still choose to shit out an Electron app on all of their users instead of going native like their competitors have done.
That's a job for a web page. It doesn't need to be installed.
Just. Don't. Subscribe.
Simple!
That's not true at all; any number of things could have killed bitcoin in its infancy. The stakes were just low. Somewhere out there is a lost collection of wallets of mine, collectively holding ~100 BTC ($1000 at the time). If regulators had cracked down hard, that 100 BTC would have become worthless, and either way I'd be out $1000.
"Risk" is an epistemic claim about the future taking the worse path. Obviously looking back it looks like risk-free money. That's not how it looked at the time. The "currency of the future" thing was always niche, especially after the crash in 2013, until a much larger cultural shift happened around 2015-ish.
Plenty of people will chime in with early bitcoin stories and how they always believed it was going to go to the moon, etc. I always find it curious, because my memory of the time period is that it was a means to an end, and that's how the overwhelming majority saw it and treated it. Funnily enough, it was that overwhelming majority that made it worth anything at all. If it had been just a bunch of yahoos clamoring about the "currency of the future" thing, it probably would have gone absolutely fucking nowhere. The irony that the yahoos ended up becoming the majority is, I think, underappreciated.
ok, now do this analysis for mobile apps...
To save you a click: It's harder to monetize desktop apps than webapps.
Lol. LMAO, even.
ig remote work is the best of both worlds
People who focus this much on "conversion" et al are dinosaurs who deserve extinction.
More importantly, the author is talking about the realities of trying to earn a decent living shipping independent software. That requires paying customers.
It's perfectly reasonable to want to be paid for your work, and it certainly doesn't warrant the vitriol in your comment.
There's an interview with him on the subject that is sadly behind a paywall now: https://www.indiehackers.com/post/how-i-grew-my-appointment-...
The world has changed a lot since then. The days when 37signals could build an empire out of simple web form apps and individuals could build and sell a SaaS that sends reminder texts are long gone. Most of the low-hanging fruit was picked long ago, and most simple services have already seen 100 different startups try to serve them.
As much as Appointment Reminder was my prime example of a successful indie SaaS, the author's second startup has (with all due respect) become one of my prime examples of not validating product-market fit before building your product. They went on to build Starfighter, a company that was supposed to be a candidate vetting platform where people could do complex coding challenges and then get matched up with companies wanting to hire people. It was built partially in the open through their newsletter and in Hacker News posts.
If you thought doing LeetCode problems to get interviews was annoying, imagine having to spend hours or days going through a CTF where you hack multi-core CPUs to do something complex with a simulated stock market. I can't even remember the entire premise, but every time I read something about the company it was getting more and more complex. At the same time I was on other forums where candidates were going the opposite direction: becoming frustrated with the proliferation of coding interviews and refusing to do interview challenges that would take hours of their time.
I remember through the entire process thinking that it seemed like a questionable business plan that wouldn't really appeal to companies or to candidates. Even the Hacker News comments were full of (surprisingly polite) feedback saying that investing a lot of hours into solving programming puzzles to maybe get some recruiter interest wasn't appealing - https://news.ycombinator.com/item?id=10480390
Some amazing foreshadowing in that thread from one of the co-founders (not Patrick McKenzie):
> I literally lack the ability to form coherent sentences about our business that don't somehow involve how to render a graph of AVR basic blocks in a React web app, is how little we're thinking about how the game interacts with recruiting right now.
> We are going to get the CTF right, and then work from there to a sustainable recruiting business. We should have done it the other way around, but we didn't. :)
As you might have guessed, it didn't work out at all. It was weird for me to follow one of my indie startup heroes on their journey into their second business that skipped all of the normal startup advice and then reached the exact conclusion that advice was warning against.
It was enlightening to follow along and I'm glad they tried something different and shared it along the way, but watching it happen was a turning point for me in how I approach advice from any one individual author, blogger, writer, or influencer.
The idea that previous business success only weakly predicts future business success, and that the correlation probably becomes even weaker the further one ventures outside one's usual perimeter, is one I believe in but can't really trace back to any concrete source, which suggests my worldview just dynamically generates it off the dome in response to this story. I probably have a decade-plus of imbibing their arguments to thank for that.
I'm still a big fan of patio11 though. Starfighter is maybe best seen these days as watching a man be professionally slightly embarrassed, then dusting himself off and going on to do a bunch of cool stuff afterwards anyway, weak correlations be damned.
"Over roughly the same period my day job has changed and transitioned me from writing thick clients in Swing to big freaking enterprise web apps."
I mean, the web kind of won. We just don't have a simple and useful way to design for the web AND the desktop at the same time. I also use the www of course, with a gazillion useful CSS and JavaScript tricks where I have to. I have not entirely given up on the desktop world, but I abandoned ruby-gtk and switched to ... jruby-swing. I know, I know, nobody uses Swing anymore. The point is not so much about using Swing per se, but simply to have a GUI that is functional on Windows, with the same code base (I ultimately use the same code base for everything on the backend anyway). I guess I would fully transition into the world wide web too, but how can you access files on the filesystem, create directories, etc. without using node? JavaScript is deliberately restricted, node is pretty awful, and ruby-wasm has no real documentation.