
Before WSL, the best ways to run unmodified Linux binaries inside Windows were CoLinux and flinux.

http://www.colinux.org/

https://github.com/wishstudio/flinux

flinux essentially had the architecture of WSL1, while CoLinux was more like WSL2 with a Linux kernel side-loaded.

Cygwin was technically the correct approach: native POSIX binaries on Windows rather than hacking in some foreign Linux plumbing. Since it was merely a lightweight DLL to link to (or a bunch of them), it also kept the cruft low without messing with ring 0.

However, it lacked the convenience of a CLI package manager back then, and I remember being hooked on CoLinux when I had to work on Windows.

Cygwin is way older than CoLinux. CoLinux is from 2004. Cygwin was first released in 1995.

The problem with Cygwin as I remember it was DLL hell. You'd have applications (such as an OpenSSH port for Windows) that would include their own cygwin1.dll, and then you'd have issues with different versions of said DLL.

Cygwin had less overhead, which mattered in a world of limited RAM and heavy swapping over slow I/O (x86-32, PATA, ...).

Those constraints also meant native applications instead of Web 2.0 NodeJS and whatnot. Java in particular had a bad name, and back then it didn't even have a coherent UI toolkit.

As always: two steps forward, one step back.


Just use ssh from Cygwin. DLL hell was rarely a problem; just always install everything via setup.exe.

The single biggest problem it has is slow forking. I learned to write my scripts in pure bash as much as possible, or as a composition of streaming executables, and avoid executing an executable per line of input or similar.
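The pattern described above can be sketched like this (hypothetical function names, illustrating the trade-off rather than any particular script): both produce identical output, but the first forks one or more processes per input line, while the second forks once for the whole stream.

```shell
# Per-line forking: every iteration spawns a subshell plus a sed process.
# Under Cygwin's emulated fork() that per-line cost dominates the runtime.
per_line() {
  while read -r line; do
    echo "$line" | sed 's/b/B/'
  done
}

# Streaming: one long-lived sed over the whole input, a single fork total.
streaming() {
  sed 's/b/B/'
}

printf 'a\nb\nc\n' | per_line     # three lines of output, many processes
printf 'a\nb\nc\n' | streaming    # same output, one extra process
```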


On your own system, sure.

As a dependency of a shipping Windows application that needs to cleanly coexist side-by-side with existing Cygwin installations and optionally support silent install/upgrade/uninstall through mechanisms like SCCM, Intune, and Group Policy?

Not so much.

I do use the setup program to build the self-contained Cygwin root that's ultimately bundled into my program's MSI package and installed as a subdirectory of its Program Files directory, however.


Slow forking is only the second biggest problem IMO. The biggest is the lack of proper signals. There's a bunch of software out there that just isn't architected to work well without non-cooperative preemption.

Huh? Signals have worked fine for a long time under Cygwin.

That's fake, cooperative emulation of signals. It isn't preemptive (unless someone got a kernel driver approved while I wasn't looking?), so many things either work poorly or not at all. Pause-the-world GC algorithms are a good example. Coroutine implementations also have to be cooperative.

If you're curious, I believe the issue was discussed at length in the Go GitHub issues years ago. Also on the mailing lists of many other languages.


I've never had a problem installing from setup, but some tools that were ported to Windows using the Cygwin DLLs (maybe still are; it is a long time since I've needed anything not in the main repo) were distributed with their own versions of those DLLs, which could clobber the versions you had otherwise (and have their versions clobbered when you fixed that).

> slow forking

There isn't much that can be done about that: starting up and tearing down a process on Windows is a much more resource-intensive operation than on most other OSs, because a lot happens by default that on other OSs a process opts into only if it needs it, by interacting with GUI libraries and such. This is why threads were much more popular on Windows: they are faster than forking on other OSs too, especially when data needs to be shared between tasks, since IPC is a lot more expensive than just sharing in-process memory; but the difference is not as stark as it is under Windows, so the potential difficulties of threaded development weren't always worth the effort elsewhere.

Cygwin can't do anything about the cost of forking processes, unfortunately.


Try using the Windows busybox port of "Bash":

https://frippery.org/busybox/index.html

It implements a subset of bash on top of ash/dash. Arrays are not supported, but it is quite fast.

The forking problem is still present, though.


Cygwin bash isn't slow either. The problem is a typical bash script isn't a series of bash operations, it's a series of command line program executions.

For example, someone might do something like this (completely ignoring the need to quote in the interests of illustrating the actual issue, forking):

    for x in *; do
      new_name=$(echo $x | sed 's/old/new/')
      mv $x $new_name
    done
Instead of something like this:

    for x in *; do
      echo $x
    done | sed -r 's|(.*)old(.*)|mv \1old\2 \1new\2|' | grep '^mv ' | bash
This avoids a sed invocation per loop and eliminates self-renames, but it's harder to work with.

Of course, the code as written is completely unusable in the presence of spaces or other weird characters in filenames; do not use this.


You could also use the inbuilt substitution mechanism:

    $ parameter='fisholdbits'
    $ echo ${parameter/old/new}
    fishnewbits

No, seriously, give an ash-derivative a try.

Dash has been benchmarked as 4x faster than bash. The bash manpage ends by stating that "bash is too big, and too slow."


> No, seriously, give an ash-derivative a try.

To solve the problem or because you saw "slow" and "bash" and wanted to bring up something cool but unrelated?

If I go from 10 seconds of forking and .04 seconds of shell to 10 seconds of forking and .01 seconds of shell, I don't actually care about how cool and fast the shell is. And I've never had the speed of bash itself be a problem.
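To make that arithmetic concrete, here is a rough, machine-dependent sketch (numbers are illustrative, not from this thread): the first loop forks a subshell per iteration, the second uses only builtins, and the fork cost dwarfs the shell's own interpretation speed.

```shell
# Time 300 iterations that fork a subshell each vs. 300 pure-builtin
# iterations. The gap comes from fork/exec, not from bash being "slow".
t0=$(date +%s%N)
i=0; while [ "$i" -lt 300 ]; do x=$(echo hi); i=$((i + 1)); done
t1=$(date +%s%N)
i=0; while [ "$i" -lt 300 ]; do x=hi; i=$((i + 1)); done
t2=$(date +%s%N)
fork_ns=$((t1 - t0))
builtin_ns=$((t2 - t1))
echo "forking loop: ${fork_ns} ns, builtin loop: ${builtin_ns} ns"
```

A 4x faster interpreter shrinks only the second number, which was already negligible.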


No, because the Ada gsh also proved that the POSIX shell syntax could perform far better.

Bash's own manpage is prominent in announcing that it is "too big and too slow." It has said this for years. Why are its supporters so firmly in denial?


Your focus here, with a whole comment just quoting the line, makes it sound like you're motivated by smugness, not an actual attempt to help.

Optimizations are cool and all, but being 4x faster at something that's already taking negligible time is not something that makes a big difference.

How long ago was that line written anyway? It feels like complaining about emacs using its "8 megabytes" of memory in the modern day.

The solution to this problem is to reduce the number of forks. Faster code on either side is aesthetically nice but unhelpful.

It's not "denial" to acknowledge that an already-fast program has a faster replacement and then get back to working on the bottlenecks.


I actually need a fast shell.

So did Debian and Ubuntu, so they demoted bash.

Whatever smugness you are interpreting here is diametrically opposed to the facts of this situation, and repeat after me:

It’s too big and too slow.


> diametrically opposed to the facts of this situation

What do you think 'this situation' is?

Repeat after me: The situation is that fork is using almost all the runtime.

The only solutions are fixing fork or forking less. Your suggestions are not related to the actual problem the user is having.


  $ rpm -q bash
  bash-5.1.8-9.el9.x86_64

  $ man bash | sed -n '/BUGS/,/^$/p'
  BUGS
       It's too big and too slow.

> Java specifically had a bad name, and back then not even a coherent UI toolkit.

Java was ahead of its time, now nothing has a coherent UI toolkit.


Qt looks nice as a user and gnome gtk isn’t too bad either

Wx isn’t bad either. https://wxwidgets.org/

You don’t get an app that looks the same across platforms. You do get apps that look like they belong on your platform, even though the code is cross-platform. It uses the native toolkit no matter where you run it across Windows, GTK, Qt, Motif, macOS/Carbon, macOS/Cocoa, and X11 with generic widgets.

Older platforms are also supported, like OS/2, Irix, and OSF/1.

https://wiki.wxwidgets.org/Supported_Platforms

It’s a C++ project, but it has bindings for most of the languages you’d use to build an application. Ada? Go? Delphi? Ruby? Python? Rust? Yes, and more. https://wiki.wxwidgets.org/Bindings


> [Wx] uses the native toolkit no matter where you run it

This is false. https://news.ycombinator.com/item?id=24250968 https://news.ycombinator.com/item?id=24259040 It was false in 2020 and it is still false today (I just checked).

I wish the Wx proponents would stop saying these things. Who exactly are you trying to fool? Do you have no concept of reputational damage? What good comes from a claim that is so easily disproven by just installing a Wx application and looking?


Do you understand the difference between a toolkit API and a graphical widget?

I'm not trying to fool anyone. I'm not affiliated with the project. I'm just aware of it and have used it a few times. You, on the other hand, have called me a liar and a fraud because I repeated exactly what the project docs state, which your two links do nothing to contradict. In fact, you linked to yourself being corrected by the actual maintainer of the project. Did you read anything he wrote?


> Do you understand the difference between a toolkit API and a graphical widget?

I think I do. I took a few minutes on the Web to confirm that what I had in mind is correct. What was the point of asking this question? Was it to trap me in a gotcha, or to paint me as clueless, or what?

> have called me a liar and a fraud because I repeated exactly what the project docs state

Good, you realise you are taking the claims made by Wx on paper. However, there's more to the world. To get the full picture, you have to also engage with what I have listed. The docs say one thing; the reality shown in the screenshots says another. There is a contradiction. It remains unresolved, not for lack of trying on my part.

> your two links do nothing to contradict

You don't get to invalidate what I wrote by simply disregarding the evidence. Engage with the points I was making. The differences in look and feel between Wx and native are plain for everyone to see and verify. So, what now? Who is right?

> Did you read anything he wrote?

Yes. Examine this:

his claim> OTOH all the standard UI elements (buttons, checkboxes, text controls, date pickers, ...) are native

my counter-evidence> Well, let's verify that… https://i.imgur.com/uHfjoUs.png No, they're not.

his deflection> Sorry, I don't know what is this supposed to prove

So instead of admitting that there is a contradiction, he just pretends to not understand it.

Also examine this:

> look good

> look good

> looks fine

> look good

I never mentioned anything about looking good, this is a distraction designed to deflect from the central point I was making. As I wrote before, the central point made by me remains completely unaddressed.

Alas, I cannot deal with those crazy-making techniques; his behaviour, measured by outcome, is indistinguishable from that of the mentally ill. With the help and advice of a friend, I came to the conclusion that it was not safe for me to respond, so I decided not to.


The problem is, most of these bindings are out of date: Delphi from 2012, Basic from 2002, D from 2016. wxRuby is a dead link. wxAda was already dead in 2009, as far as I can tell from the discussions Google turns up.

So, if you use wxWidgets, you probably have to use either the C++ or the Python version; the others are unlikely to be supported.


wxRuby has been resurrected as wxRuby3, see https://mcorino.github.io/wxRuby3/

Among actively developed bindings, there is also wxRust at https://crates.io/crates/wxdragon


I used cygwin pretty heavily in the late 90s and early 2000s. It was slow. I had scripts that took days to run dealing with some network file management. When I moved them over to Linux out of frustration (I brought in something like a Pentium 90 laptop, a Gateway Solo I think?) they were done in tens of minutes.

I'm sure they did the best they could ... it was just really painful to use.


This matches my experience as well. Some of my earliest rsync experiences were with the Cygwin version, and I remember scratching my head and wondering why people raved about this tool that ran so slowly. Imagine my surprise when I tried it on Linux. Night and day!

> Cygwin had less overhead which mattered in a world of limited RAM and heavy, limited swapping (x86-32, limited I/O, PATA, ...).

Maybe so, but my memory of Cygwin was waiting multiple seconds just for the Cygwin CLI prompt to load. It was very slow on my machines.


Cygwin works fine if I am compiling stuff locally for my own use, but that cygwin1.dll (plus any dependencies) is a problem for distribution.

What I usually do is make sure my code builds with both Cygwin and MinGW, and distribute the binaries built with MinGW.


It's not just DLL hell. Cygwin was also notorious for being really out of date. Security vulnerabilities and missing features were both very common at one point.

I have used cygwin for 30 years and never had any dll hell issues, because all the programs came from the cygwin installer. Never once needed something outside it.

Meanwhile those that complained about Java, now ship a whole browser with their "native" application, and then complain about Google taking over the Web.

I think those are two solidly different camps of people

Technically correct by some estimation, perhaps, but Cygwin is a crazy approach: it was slow (contrary to the implication of the "low cruft" claim), not as compatible as these other approaches, required recompilation, and was widely disliked at most points in its life. There's a lot of crazy voodoo happening inside cygwin1.dll to make this work; it totally qualifies as "hacking in some foreign Linux plumbing", it's just happening inside your process. Just picture how fork() is implemented inside cygwin1.dll without any system support.

Cygwin doesn't work at all in Windows AppContainer package isolation; too many voodoo hacks. MSYS2 uses it to this day, and as a result you can't run any MSYS2 binaries in an AppContainer. Had to take a completely different route for Claude Code sandboxing because of this: Claude Code wants Git for Windows, and Git for Windows distributes MSYS2-built binaries of bash.exe and friends. Truly native Windows builds don't do all the unusual compatibility hacks that cygwin1.dll requires; I found non-MSYS2-built binaries of the same programs all ran fine in AppContainer.


> but Cygwin is a crazy approach, was slow

A lot of this is issues Microsoft could fix if they were sufficiently motivated

e.g. Windows lacks a fork() API so cygwin has to emulate it with all these hacks

Well, technically the NT API does have the equivalent of fork, but the Win32 layer (CSRSS.EXE) gets fatally confused by it. Which again is something Microsoft could potentially fix, but I don’t believe it has ever been a priority for them

Similarly, Windows lacks exec(), as in replace the current process with new executable. Windows only supports creating a brand new process, which means a brand new PID. So Cygwin hacks it by keeping its own PID numbers; exec() changes your Windows PID but not your Cygwin PID. Again, something Microsoft arguably could fix if they were motivated
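The semantics Cygwin has to fake can be demonstrated on any real POSIX system (a small sketch; the variable names are arbitrary). Here exec replaces the process image in place, so the PID printed before and after is the same; under Cygwin the underlying Windows PID would change, and only the emulated Cygwin PID stays stable.

```shell
# Outer sh prints its PID, then execs a fresh sh that prints its PID again.
# On POSIX both lines are identical, because exec reuses the same process.
before_and_after=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(echo "$before_and_after" | head -n1)
second=$(echo "$before_and_after" | tail -n1)
[ "$first" = "$second" ] && echo "PID preserved across exec"
```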


> A lot of this is issues Microsoft could fix if they were sufficiently motivated...

They did fix it, in a sense, with WSL1 picoprocesses. Faster and more compatible than Cygwin. Real fork and exec on the Windows NT kernel. Sadly, WSL2 is even faster and more compatible while being much less interesting. WSL1 was pretty neat, at least, and is still available.

In any event, this diversion doesn't change my analysis of Cygwin. Cygwin still sucks regardless of whose fault it is. I intentionally left this stuff out of my post because I thought it was obvious that Cygwin is working around Windows limitations to hack in POSIX semantics; it's the whole point of the project. None of us can change Windows or Cygwin and they're both ossified from age and lack of attention. We have to live with the options we've actually got.

If you need a Windows build of a Linux tool in 2026 and can't use WSL, try just building it natively (UCRT64, CLANG64, MSVC, your choice) without a compatibility layer. Lots of tools from the Linux ecosystem actually have Windows source compatibility today. Things were different in the 90s when Cygwin was created.


Developing on cygwin, however, was a right pain. If a C library you wanted to use didn't have a pre-built cygwin version (understandable!) then you end up doing 'configure, make' on everything in the dependency tree, and from memory about two thirds of the time you had to edit something because it's not quite POSIX enough sometimes.

Ha ha doing Unix like it was 1989. At the time I thought configure was the greatest of human achievements since I was distributing software amongst Sun machines of varying vintage and a Pyramid. I want to say good times but I prefer now ha ha

autotools felt old even in the '90s

Autotools was designed to produce a configure script with zero dependencies other than the compiler toolchain itself. I always thought it would be a good way to bootstrap a system configuration database (like the kind X11 already had; I forget the name), but it turned out to be too convenient to just drop autotools into every project instead.

So now even today, compiling any GNU package means probing every last feature from scratch and spitting out obscenely rococo scripts and Makefiles tens of thousands of lines long. We can do better, and have, but damn are there a lot of active codebases out there that still haven't caught up.


Reminds me of a fun weekend I spent ~5 years ago building the newest version of every GNU program I could get to build on NEXTSTEP 3.3 (running on 68k NeXT hardware) without major changes.

Nowadays MSYS2, which does depend on Cygwin under the hood, offers such a package manager (pacman, from Arch Linux), and it is quite a user-friendly way to run native POSIX binaries on Windows without a Linux VM.

In my personal experience, MSYS2 would work great until it didn't. Unless this has changed, from what I remember, MSYS2 compiled everything without PIC/PIE, and Windows lets you configure, system-wide, whether ASLR is used, and whether it's applied "if supported" or always. If that setting is anything but off, MSYS2 binaries will randomly crash with heap allocation errors, or they did on my system. It happened so often when I had the actual coreutils installed that I switched to uutils-coreutils, even though I knew uutils-coreutils has some discrepancies/issues. I don't know if they've fixed that bug; I did once ask why they didn't just allow full ASLR and get on with things, and they claimed they needed non-ASLR builds for Docker.

MSYS2 is very confusing. When you pick "MSYS2", you are building exclusively for the MSYS2 target environment, and might not have proper compatible windows headers. When you pick "MINGW32/64", you are instead building for the normal windows environment, and get proper windows headers. But if you didn't know that, you would end up confused about why your program is not building.

It doesn't help that the package simply named "gcc" is for the MSYS2 target.
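The split shows up directly in the package names: each environment has its own toolchain package, distinguished only by prefix (these follow MSYS2's real naming convention; run inside the matching MSYS2 shell).

```shell
pacman -S mingw-w64-ucrt-x86_64-gcc     # UCRT64: native Windows, Universal CRT
pacman -S mingw-w64-clang-x86_64-clang  # CLANG64: native Windows, LLVM toolchain
pacman -S mingw-w64-x86_64-gcc          # MINGW64: native Windows, legacy MSVCRT
pacman -S gcc                           # MSYS: the Cygwin-like POSIX environment
```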


And just to add insult to injury, you probably don't want MINGW64 either, as it relies on the ancient MSVCRT.DLL C runtime library that lacks support for "new" features like C99 compatibility and the UTF-8 locale, and that Microsoft never supported for use by third-party applications in the first place.

Instead, you either want UCRT64 or CLANG64, depending on whether you want to build with the GNU or LLVM toolchains, as it uses the newer, fully-supported Universal C Runtime instead.


It's still useful to use MSVCRT in certain circumstances, such as targeting the earliest 64-bit versions of Windows.

As for UTF-8 support, it's the manifest file that determines whether Windows sets the ANSI code page to UTF-8. (There's also an undocumented API function that resets the code page for GetACP and the Rtl functions that convert ANSI into Unicode. But this would run after all the other DLLs have finished loading.) Having the code page correct is enough to support Unicode filenames and Unicode text in the GUI.

It just won't provide UTF-8 locale support for the standard C library.
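For reference, the manifest setting being described is the `activeCodePage` element (supported since Windows 10 version 1903); a minimal sketch of an application manifest using it:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application>
    <windowsSettings>
      <!-- Forces GetACP() and the ANSI ("A") APIs to use UTF-8 -->
      <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>
```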


Sure, or older 32-bit versions of Windows for that matter, or for building software that hasn't been ported to UCRT.

I can certainly relate to this: I'm currently sitting on a request for an enhancement to a product (currently running on a 32-bit Windows 10 VM) with a build system that has never been updated to support any Microsoft platform other than MS-DOS, or toolchain newer than Microsoft C 5.1.


> lacks support for "new" features like C99 compatibility

This made me laugh. It reminded me of coursework I did in university that was clearly written many years before I took the course, as it recommended we manually enable the "new" C99 standard in our compiler. I guess that advice survived in the documentation up through when I took the course, at which point it was still relevant, since GCC was otherwise defaulting to C11 by then.


w64devkit is fine too; with just a few PATH settings and the SDL2 libraries, I could even compile UXN and some small SDL2-bound emulators.

https://github.com/skeeto/w64devkit


MSYS2 is my favorite in this area. Super lightweight and easy to use, highly recommend.

It's annoying to wade through six different versions of the same package for different runtimes and word sizes. Heaven forbid you accidentally install the wrong one.

Cygwin implements a POSIX API on Win32 with a smattering of Nt* calls to improve compatibility but there's a lot of hoop jumping and hackery to get the right semantics. Fork isn't copy on write, for one thing.

I was a Cygwin user from about 1999 to 2022 or so, spent a little time on wsl2 (and it's what I still use on my laptop) but I'm fully Linux on the desktop since last year.


Ha that tracks my own usage and timeline almost precisely, although I was using cygwin and WSL2 in parallel for a while. Lot of complaints about cygwin speed here, but NTFS filesystem access is actually a lot faster on cygwin than WSL2!

I thought WSL2 was functionally a virtual machine with deep host integration. That’s why you need Hyper-V.

Sort of. Technically speaking, just enabling Hyper-V turns your base Windows install into a VM. WSL2 then just runs alongside it.

Enabling hyper-v turns your base windows install into a VM host, not a virtual machine itself.

It's kind of both. Hyper-V is a bare-metal (type 1) hypervisor. Windows runs virtualized, one level above it, in a privileged (host) VM, next to other (guest) VMs.

https://en.wikipedia.org/wiki/Hyper-V#Architecture


Huh that’s interesting I didn’t realize that

Nope, the best way was VMWare Workstation, followed by Virtual Box.

And before those Virtual PC by Connectix. Which Microsoft bought and dumped.

More like they integrated the technology they cared about into their products.

>Cygwin was technically the correct approach

Requiring every single Linux app developer to recompile their app using Cygwin and account for quirks that it may have is not the correct approach. Having Microsoft handle all of the compatibility concerns scales much better.


Why not? That is just a matter of porting stuff over, like a FreeBSD ports collection, an apt repo, or a bunch of scripts for Proton/Wine such as Lutris.

Cygwin started in 1995. Microsoft wasn't cooperative with FOSS at all at that point. They were practicing EEE, and using WNT to eat into the market for expensive Unix/VMS machines.


I remember when I first put cygwin in my path on Windows and it felt like magic. I can just ssh and git now? No need for putty or WinGit????

I've been running colinux for years until early 2009 when I reinstalled my laptop with Ubuntu 8.04 and Windows XP in a VM. So much faster.

Assuming you were on NT-lineage, rebuilding for SFU (Interix) was the technically correct and nice implementation, though since a lot of Linux programs are non-portable (or have maintainers who mistakenly think they can do better than autotools) it was a pain in practice.

On Windows NT building software from source under Interix[0] (nee OpenNT, later "Subsystem for Unix Applications") was pretty nice.

Interix was implemented as proper NT kernel "subsystem". It was just another build target for GNU automake, for example.

(Being that Interix was a real kernel subsystem I have this fever dream idea of a text-mode "distribution" of NT running w/o any Win32 subsystem.)

[0] https://en.wikipedia.org/wiki/Interix


> However, it lacked the convenience of a CLI package manager back then

Cygwin still lacks that to this day; you have to fire up the GUI installer to update packages.

MSYS2 is cygwin with pacman cli.


I used to use LOADLIN.exe - worked pretty well, IIRC


