I suppose there are problems in many teams, yes - the majority of humanity is just not mature enough to treat each other professionally :/
Still - 4 out of the 5 founding companies being pure evil does not fill me with confidence :/
Taken from the Wikipedia page on Rust:
On February 8, 2021, the formation of the Rust Foundation was announced by its five founding companies (AWS, Huawei, Google, Microsoft, and Mozilla).[36][37] In a blog post published on April 6, 2021, Google announced support for Rust within the Android Open Source Project as an alternative to C/C++.[38]
Four out of five founding companies are evil to the bone, with only Mozilla being somewhat reputable. That does not give me much confidence, sadly.
On November 22, 2021, the Moderation Team, which was responsible for enforcing community standards and the Code of Conduct, announced their resignation “in protest of the Core Team placing themselves unaccountable to anyone but themselves”.[39]
Why am I not surprised?
In May 2022, the Rust Core Team, other lead programmers, and certain members of the Rust Foundation board implemented governance reforms in response to the incident.[40]
At least that. However, I don’t care enough for the time being to spend my morning on reading what exactly they implemented.
Thanks for laying out your concerns. As a C++ developer who does not know the other languages you speak of (I assume Rust, Go), I can agree with some of your points, but I see some of them differently:
C++ can be complex because it has a lot of features, and especially the newer standards have introduced syntax that is hard to understand or read at times. However, those elements are not frequently used, or if they are, the developer gets used to them quickly and they won’t slow development down. As a matter of fact, most development time should be spent thinking about algorithms, and thinking very well before implementing them - and until implementation, the language does not matter. I do not think that language complexity leads to more bugs per se. My biggest project is just short of 40k lines of code, and most of the bugs I produced were classical off-by-one errors or missing range checks - bugs you can just as well produce in other languages.
C++ no longer requires you to do manual memory management - that is what smart pointers and RAII-style programming are for.
I can’t make a qualified comment on that, due to lack of expertise - you might be right.
You’re somewhat repeating point 1) here with slow development. But you raise a good point: web standards have become insane in terms of quantity and interface sizes. Everyone and their dog wants to reinvent the wheel, and supporting that in itself requires a very large team, I would say. As stated for point 1), I do not agree that development in C++ has to be slower.
True. As someone who just suffered from problems introduced on Windows (the Cygwin POSIX message queue implementation got broken by Win10, and inotify does not work on Windows Subsystem for Linux), I can confirm that while the C++ standard library is not much of a problem, the moment you interface with the host OS you leave the standard realm and it becomes “zombieland”. Also, for some reason, the realtime library implementation on macOS is different, breaking some very simple time-based functions. So yeah, that’s annoying, but it can be circumvented by creating platform-specific wrapper libraries that expose a uniform API. Other languages appear to handle this in the compiler and runtime, which is probably better - meaning the I/O operations were absorbed into those languages’ core features.
I am highly doubtful of people relying on garbage collection - a programmer who doesn’t know exactly when their objects come into existence and when they cease to exist is likely to make much bigger mistakes and produce very inefficient code. The aforementioned smart pointers in C++ solve this issue: object lifetime is the scope of the smart pointer declaration, and for shared pointers, the object expires when the last shared pointer owning it goes out of scope. As for concurrent programming, I do not know if you mean concurrency (threads) or multiple people working on the same project. While multi-threading can be a bit “weird” at first, C++ gives you a lot of control over shared variables and memory barriers, which might enable a team to produce a browser that is much faster - and speed, I believe, is a core requirement for modern browsers.
As for your tl;dr: definitely not “less concurrency”, that makes no sense. The other points may or may not be true, keeping in mind the answers I gave above.
Not sure if you are trying to be funny, but if not: enlighten us?
High five, brother :) I think the XP crowd was just the generation one step more tolerant towards privacy intrusions / not quite computer-knowledgeable enough to understand the implications of letting your operating system phone home. In terms of user interface, it was indeed tolerable - you could still configure it to look and behave mostly like Win2K, which is what I had to do for work for quite a long time.
Compared to Win2k, it would just be a resource-hog. :/
Agreed, XP was the turning point - I decided I will never let such an intrusive software on my private computers, so I switched from Win2k to Linux.
I liked Win2K, yes - then Linux :)
My trivial (non-legal ;) answer is: if you are working for a corporation that is looking to patent something / make something closed-license, then the moment you have looked at a single line of my code relevant to what you are doing, you are forbidden from releasing under any more restrictive license. If you are a private person working on open source? Then you be the judge of whether you copied enough of my code that it is more than just “inspired by”.
Again, I don’t have a problem with copying code - but I as a developer know whether I took enough of someone else’s algorithm that I should mention the original authorship :) My only problem with circumventing licenses is when people put more restrictive licenses on plagiarized code.
And - I guess - in conclusion: if someone makes a license so permissive that putting a restrictive (commercial) license or patent on plagiarized / derived work becomes possible, that is also something I don’t want to see.
As a big proponent of open source, I see nothing wrong even with copying code - the point is that you should not be allowed to claim something as your own idea, and definitely not to claim copyright on code that was “inspired” by someone else’s work. The easiest solution would be to forbid patents on software (and patents altogether) completely. The only purpose that FOSS licenses have is to prevent corporations from monetizing the work under the license.
“Why does no one say murder is bad unless China is murdering”
I can not fathom how you absolutely nailed the essence of my comment, yet misunderstood it (and - arguably - your own example) so fundamentally.
Let me try to help, once:
“Why do most people not complain about murder when Microsoft is doing it, but when China is doing it, the very justified outrage can be heard?”
With the obligatory “fuck everyone who disregards open source licenses”, I am still slightly amused at the raised eyebrows here, while nearly no one complains about MS using GitHub to train their Copilot LLM, which will help circumvent licenses & copyrights by the bazillion.
“barely any” is neither entirely accurate, nor does it excuse the use of flatpaks.
That is indeed exactly my point. LLMs are just a language-tailored expression of deep-learning, which can be incredibly useful, but should never be confused for any kind of intelligence (i.e. logical conclusions).
I appreciate that you see my point and admit that it makes some sense :)
Example where I think pattern recognition by deep learning can be extremely useful:
But what I am afraid is happening with people who do not see why a very simple algorithm already counts as AI, yet consider LLMs AI, is that they mentally reserve “AI” for what seems “AGI” / “human-like”. They mistake the patterns of LLMs for a conscious being, and that is incredibly dangerous in terms of trusting the answers given by LLMs.
Why do I think they subconsciously imply (self-)awareness / conscience? Because refusing to consider a control mechanism like a simple room thermostat as (very limited) AI means viewing it as “too simple” to be AI - which means that a person with such a view makes a qualitative distinction between control laws and “AI”, where a quantitative distinction between “simple AI” and “advanced AI” would be appropriate.
And such a qualitative distinction, which elevates a complex word-guessing machine to “intelligence”, can only be made by people who actually believe there’s understanding behind those word predictions.
That’s my take on this.
AI did boom, but people don’t realize the peak happened a year ago.
A simple control algorithm - `if (temperature > LIMIT) turnOffHeater();` - is AI, albeit an incredibly limited one.
LLMs are not AI. Please don’t parrot marketing bullshit.
The former has an intrinsic understanding of a relationship based in reality, the latter has nothing of the sort.
If the AI boom is a dud,
Whaddya mean, “if”? Emperor wears no clothes…
Beyond root processes, none that I am aware of. Hence I configured all my internet applications and Steam to run in a jail :) firejail & bubblewrap come as native packages, unlike the flatpak contents.
Isn’t flatpak by definition relying on a second software source, and hence twice the risk of relying on a single source (your OS repo)?
The Last Door (1 & 2)
Darkside Detective
Thank me later.
You had some valid points as well - I enjoy a good constructive exchange, thank you! :)