FUD scoop

It was one of those moments where you weren't quite sure you had heard something right.

It was day-whatever of the Supercomputing Asia conference in Singapore, and I was halfway listening to one of the speakers explain his company’s advances in deep learning and artificial intelligence. “Halfway” because the material of this particular talk was soon way, way over my head, and on my laptop I was trying to figure out why Travis seemed to be borking on the update pull requests for the new Red Hat on GitHub site.

But I snapped back into the room when I thought I heard what sounded like a full-tilt FUD (fear, uncertainty, and doubt) rant about open source. I glanced at my colleague Rich Bowen, also in attendance, and he was shaking his head.

Yep, it was FUD all right. Suddenly, it was 2000 all over again.

The speaker’s argument against open source was that the tools available in his particular space in computer science were “not as stable” as those his company offered as a proprietary solution. Nor were they as widely used. This was interesting, because throughout the conference, Rich and I had seen quite a few mentions of open source projects being used in artificial intelligence.

Other arguments the speaker used to deride open source technology:

  • Poor support and training
  • Poor efficiency
  • No access to state-of-the-art research and development
  • And (here we go) “intellectual property issues”

I don’t run around in the AI sector, so I can’t speak with any certainty about the state of the open source projects in this space, but given that the last two points in this list were pure old-school FUD, I’m willing to take a guess that the first two points were off the mark, too.

And I get it: this speaker was trying to pitch his company’s software to the audience. But rather than highlight the features of his own code, he opted to knock down an entire class of competing software.

The facts around open source no longer support this kind of FUD. No state-of-the-art R&D? Really? The entire big data sector was built from the start on open source software. Much of the code in Internet of Things technology is open source. And even speakers at this same Supercomputing conference were praising the availability and feature sets of open source software like Kubernetes, PyTorch, and TensorFlow. In fact, given the largely academic affiliation of this conference’s attendees, I would submit this speaker was tone-deaf to the fact that students and their teachers working in cutting-edge technology like this tend to want to get into the guts of the code and optimize it. Can’t do that with proprietary software.

As for the dog-whistle argument about IP issues, it has been shown time and again that license compliance with free and open source software is no riskier than license compliance with any other kind of software. It’s really very simple: use software, abide by its license. Nor are patent threats unique to open source software: they’re just the weaponization of IP by organizations and patent trolls trying to extract money for work they didn’t do. Sad, but not a problem inherent to open source.

Again, open source is not magic pixie dust that makes any software amazing. But it goes both ways: software isn’t bad just because you can see its code, either.

All code has its good and bad qualities, whether or not you can see the source. Which is why FUD belongs in the past.

Image by Libby Levi under CC BY-SA 2.0 license.


About the author

Brian Proffitt is Senior Manager, Community Outreach within Red Hat's Open Source Program Office, focusing on enablement, community metrics, and relationships with foundations and trade organizations. Brian's experience with community management includes knowledge of community onboarding, community health, and business alignment. Prior to joining Red Hat in 2013, he was a technology journalist with a focus on Linux and open source, and the author of 22 consumer technology books.