F-Droid build servers can't build modern Android apps due to outdated CPUs
On August 7, 2025, a new build problem started hitting Android apps on F-Droid: apps using Android Gradle Plugin (AGP) 8.12.0 or Gradle 9.0 have been unable to publish updates.
The root cause: Google's new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid's build farm hardware doesn't support. A similar issue hit AGP 4.1.0 in 2021; now it has returned and affects hundreds of apps.
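As a quick sanity check of that claim, a build host's advertised CPU features can be compared against what the new aapt2 reportedly needs. A minimal sketch in Python; the required-flag set comes from the error reports, and the sample `flags` strings are illustrative, not real F-Droid server output:

```python
# Flags the AGP 8.12.0 aapt2 reportedly needs, per the F-Droid reports.
REQUIRED = {"ssse3", "sse4_1"}

def missing_flags(cpuinfo_flags: str) -> set[str]:
    """Return required flags absent from a /proc/cpuinfo 'flags' line."""
    present = set(cpuinfo_flags.split())
    return REQUIRED - present

# An AMD K10-era Opteron advertises SSE3/SSE4a but not SSSE3 or SSE4.1:
k10 = "fpu sse sse2 sse3 sse4a lm"
print(missing_flags(k10))       # both required flags reported missing

# Anything Penryn/Westmere or newer covers both:
modern = "fpu sse sse2 ssse3 sse4_1 sse4_2 lm"
print(missing_flags(modern))    # set()
```

On a real Linux box the equivalent quick check is something like `grep -o 'ssse3\|sse4_1' /proc/cpuinfo | sort -u`.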
As an example, my open-source app MBCompass hit this issue. I downgraded to AGP 8.11.1 with Gradle 8.13 to make it build, but even then, F-Droid failed due to a baseline profile reproducibility bug in AGP. The only workaround was disabling baseline profiles and pushing yet another release.
This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.
References:
- F-Droid admin issue: https://gitlab.com/fdroid/admin/-/issues/593
- Catima example: https://github.com/CatimaLoyalty/Android/issues/2608
- MBCompass case: https://github.com/CompassMB/MBCompass/issues/88
This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?
https://developers.redhat.com/blog/2021/01/05/building-red-h...
Think of how much faster their servers would be with one of those Epyc consumer cpus.
I was about to ask people to donate, but they have $80k in their coffers. I realize their budget is only $17,000 a year, but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 mATX consumer Epyc servers, as they come in at around $2k, under budget. If they have a fleet of these old servers, I imagine a Zen5 one can replace at least a few of them and consume far less power and space.
https://opencollective.com/f-droid#category-BUDGET
Not sure if this includes their Liberapay donations either:
https://liberapay.com/F-Droid-Data/donate
> This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?
This is not always a given. On our virtualization platform, we recently upgraded a vendor-supplied VM, and while it booted, some of the services on it failed to start despite us exposing an x86_64-v2 + AES CPU to the VM. The minimum requirements cited "Pentium and Celeron", so that was more than enough.
It turned out that one of the services used a single instruction added in a v3 or v4 CPU, and failed to start. We changed the exposed CPU and things have returned to normal.
So, their servers might be capable but misconfigured, or the binary might require more than what it states, or something else.
A developer on the ticket writes: "Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3"
Ooh. They are at least ~15 years old, then. Maybe they scored some old 4-socket Dell R815s. 48 cores ain't that bad for a build server.
It's kinda good they use such old systems, as the vast majority of pollution occurs during manufacturing of devices since we usually use them only a handful of years. Iirc the break-even point was somewhere around 25 years, as in, upgrading for energy efficiency then becomes worth it (source: https://wimvanderbauwhede.codeberg.page/articles/frugal-comp...). 15 goes a long way towards that!
On the other hand, I didn't dig very deep into the ticket history now but it sounds like this could have been expected: it broke once already 4 years ago (2021), so maybe planning an upgrade for when this happens again would be good foresight. Then again, volunteers... It's not like I picked up the work as an f-droid user either
While I appreciate the sentiment, I think you may be misreading the "Emissions from production of computational resources" section of that link.
It says for servers that 13-21 years is the break even for emissions from production vs consumption.
The 25 year number is for consumer devices like phones and laptops.
I would also argue that average load on the servers comes into play.
$2-3k? That's barely the price of a bare lower-end Threadripper CPU, not a full Epyc server.
At our supplier $2k would pay for a 1U server with a 16 core 3GHz Epyc 7313P with 32GB RAM, a tiny SSD and non-redundant power.
$3k pays for a 1U server with a 32 core 2.6GHz Epyc 7513 with 128GB RAM and 960GB of non-redundant SSD storage (probably fine for build servers).
All using server CPUs, since that was easier to find. If you want more cores or more than 3GHz things get considerably more expensive.
Yes, but those are Zen 3 Milan CPUs, released in 2021 I believe.
Not that they are bad or wouldn't be way better than what they have, just that I thought the parent was quite the optimist with his Zen4/Zen5 pricing.
OP did say "consumer Epyc", so presumably referring to the parts using the AM5 socket. From a quick check on Newegg, it looks like barebones servers for that platform start at under $1000, to which you need to add CPU, RAM, and storage. So a $3000 budget to assemble a low-end Zen4/5 EPYC server is realistic: $570 for the 16-core EPYC 4565P, a few hundred for DDR5 ECC unbuffered modules, a few hundred for an enterprise SSD, and you have a credible current-gen server from readily available parts at retail prices, without any of the enterprise pricing and procurement hassle.
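Tallying that parts list: the CPU price is the comment's retail figure, while the RAM and SSD line items are my ballpark stand-ins for "a few hundred" each, so treat this as a sketch, not a quote:

```python
# Hypothetical bill of materials for a low-end AM5 EPYC build server.
parts_usd = {
    "barebones AM5 server": 1000,  # "start at under $1000" per the comment
    "EPYC 4565P, 16 cores": 570,   # comment's retail figure
    "DDR5 ECC UDIMMs": 400,        # assumed "a few hundred"
    "enterprise SSD": 300,         # assumed "a few hundred"
}
total = sum(parts_usd.values())
print(total)  # 2270 -- comfortably under the $3000 budget
```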
That was my intention; mATX AM5 parts.
I imagine they would need quite a few servers to replace their current setup.
Then there's also the overhead of setting up and maintaining the hardware in their location. It's not just a "solve this problem for ~$2,000 and be done with it".
I don't know the actual specs or requirements. Maybe 1 build server is sufficient, but from what I know there's nearly 4,000 apps on FDroid. 1 server might be swamped handling that much overhead in a timely manner.
One server with today's tech can easily replace several servers that are 12+ years old. 4000 apps doesn't sound like a lot of work for one machine, unless you assume almost all of them are releasing new builds more than once a week. A 16-core CPU can rebuild a full Gentoo desktop OS multiple times a week.
Is that $2k/$3k for the year?
That's $2k/3k to get a box with fully assembled hardware delivered to your doorstep or to a DC of your choice.
Space in your basement or the colo rack of a datacenter along with power, data and cooling is an expense on top. But whatever old servers they have are going to take up more space and use more power and cooling. Upgrading servers that are 5+ years old frequently pays for itself because of the reduced operating costs (unless you opt for more processing power at equal operating cost instead)
Low-end EPYCs (16-24 cores), especially older generations, are not that expensive: $800-1.2k in my experience. Less when in a second-hand server.
Perhaps the servers run Coreboot / Libreboot?
I'm not even sure mainline Linux supports machines this old at this point. The cmpxchg16b instruction isn't that old, and I believe it's required now.
CMPXCHG8B is required as of a month or two ago, not 16B (i.e., the version from the 90's is now required)
See https://lkml.org/lkml/2025/4/25/409
32-bit Linux is still supported by the kernel and Debian, and Arch and Fedora still support baseline x86_64.
RHEL 8 is still supported, and Ubuntu is still baseline x86_64 I believe, for commercial distros. Not sure about SuSE.
> 32 bit Linux is still supported by the kernel and Debian
Deprecated for Debian
https://www.debian.org/releases/stable/release-notes/issues....
> about to ask people to donate, but they have $80k in their coffers
I'd still ask folks to donate. $80k isn't much at all given the time and effort I've seen their volunteers spend on keeping the lights on.
From what I recall, they do want to modernize their build infrastructure, but it is as big an investment as they can make. If they had enough in their "coffers", I'm sure they'd feel more confident about it.
It isn't like they don't have any other things to fix or address.
I would too but do you have a link to them talking about it?
>they have $80k in their coffers but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 matx consumer Epyc servers
I would also like to know this.
I would much rather they spent that on having the devs network and travel; the servers work.
Why are the builds failing then?
planned obsolescence by Google
Beginning to use a CPU opcode that is 19 years old doesn't feel like planned obsolescence. If anything, it feels like unplanned obsolescence... "Oh hell what do you mean your CPU doesn't have that opcode no we've just been running the compiler with the default flags and that opcode got added to the default two months ago after a 10-year fight about the possible consequences of changing defaults!"
Although I'm a little surprised to learn that the binary itself doesn't have enough information in its header to be able to declare that it needs SSSE3 to be executed; that feels like something that should be statically-analyzed-and-cached to avoid a lot of debugging headaches.
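One can approximate that missing static check by scanning a disassembly for mnemonics that only exist in the newer extensions. A rough sketch; the mnemonic sets are abbreviated and the objdump-style text is a made-up sample, not real aapt2 output:

```python
# Detect SSSE3/SSE4.1 usage in a textual disassembly by looking for
# mnemonics those extensions introduced (lists abbreviated for the demo).
SSE41_MNEMONICS = {"pinsrd", "pextrd", "ptest", "pmulld", "roundss"}
SSSE3_MNEMONICS = {"pshufb", "palignr", "pmaddubsw"}

def extensions_used(disassembly: str) -> set[str]:
    found = set()
    for line in disassembly.splitlines():
        # objdump lines look like: "401000: 66 0f 38 40 c1  pmulld %xmm1,%xmm0"
        for tok in line.split():
            if tok in SSE41_MNEMONICS:
                found.add("sse4.1")
            if tok in SSSE3_MNEMONICS:
                found.add("ssse3")
    return found

sample = """
401000: 66 0f 38 40 c1  pmulld %xmm1,%xmm0
401006: 66 0f 38 00 ca  pshufb %xmm2,%xmm1
"""
print(extensions_used(sample))  # reports both ssse3 and sse4.1
```

In practice something like `objdump -d aapt2 | grep -wE 'pmulld|ptest|pshufb'` gives a quick yes/no before you burn an hour debugging an illegal-instruction crash.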
> "Oh hell what do you mean your CPU doesn't have that opcode [...]"
hobbyist dev? sure
Google? nope
Did they make any explicit guarantees that their newly-cut binaries would continue to support 20-year-old architectures?
Googlers aren't gods. It's a 100,000-person company; they're as vulnerable to "We didn't really think of that one way or the other" as anyone else.
ETA: It's actually not even Google code that changed (directly); Gradle apparently began requiring SSSE3 (https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153) and Google's toolchain just consumed the new constraint from its upstream.
Here, I'm not surprised at all; Google is not the kind of firm that keeps a test-lab of older hardware for every application they ship, so (particularly for their dev tooling) "It worked on my machine" is probably ship-worthy. I bet they don't even have an explicit architecture target for the Android build toolchain beyond the company's default (which is generally "The two most recent versions" of whatever we're talking about).
They clearly don't
Yeah and everybody was complaining how slow the builds are for years. I really want to know too
Probably a case of "don't fix it if it ain't broke" keeping old machines in service too long, so now they broke.
That's like ignoring your 'Check Engine' light because the engine still runs.
This is pretty concerning, especially as FDroid is by far the largest non-google android store at the moment, something that I feel is really needed, regardless of your feelings about google.
Does anyone know of plans to resolve this? Will FDroid update their servers? Are google looking into rolling back the requirement? (this last one sounds unlikely)
I agree it’s a bit concerning but please keep in mind F-Droid is a volunteer-run community project. Especially with some EU countries moving to open source software, it would be nice to see some public funding for projects like F-Droid.
> please keep in mind F-Droid is a volunteer-run community project.
To me, that's the worrying part.
Not that it's run by volunteers. But that all that's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.
Opposition to market dominance and monopolies by multibillion multinationals shouldn't just come from a few volunteers. If that's the case, just roll over and give up; the cause is lost. (As I've done, hence my defeatism)
Aside from that: it being "a volunteer-run community" shouldn't be put forward as an excuse for why it's in trouble/has poor UX/is hard to use/is behind/etc. It should be a killer feature. Something that makes it more resilient/better attuned/easier/earlier adopting/etc.
The EU governments should gradually start switching to open source solutions. New software projects should be open source by default and only closed if there is a real reason for it.
The EU is already home to many OS contributors and companies. I like the Red Hat approach where you are profitable, but with open source solutions. It's great for governments because you get support, but it's much easier to compete, which reduces prices.
Smaller companies also give more of their money to open source. Bigger companies can always fork it and develop it internally and can therefore pressure devs to do work for less. Smaller companies have to rely on the projects to keep going and doing it all in house would be way too expensive for most.
> I like the Red Hat approach where you are profitable, but with open source solutions.
The Red Hat that was bought by IBM?
I agree with your goals, but the devil is in the methods. If we want governments to support open source, the appropriate method is probably a legislative requirement for an open source license + a requirement to fund the developer.
idk if you meant this, but I thought of F-Droid and other major open source projects being publicly funded by EU.
It seems like every other year I read a story about Munich switching to Linux. It keeps happening so evidently it's not sticking very well. Either there are usability or maintenance problems, or Microsoft's sales and lobbying is too effective.
Apple has an iPhone app store monopoly, but Google is the bad guy here?
hogwash
>But that all there's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.
Always has been.
Google has recently lost two cases against the DoJ; keeping my fingers crossed that Android will be divested.
It's interesting to me how people panicked about the idea that 23AndMe's bankruptcy implies that some unknown, untrusted third-party will have their genetic information, but people are also crowing at the idea that a company that has purchase history on all your smartphone apps (and their permissions, and app data backup) could be compelled by the government to divest that function to some unknown, untrusted third-party.
Hope I didn't come across as criticising FDroid here- It seems sucky to have build requirements change under your feet.
It's just I think that FDroid is an important project, and hope this doesn't block their progress.
> Nice to see some public funding for projects like F-Droid
Definitely. A pre-SSE4.1 CPU for building apps in 2025? No way!!
Maybe if f-droid is important to you, donate, so they can buy newer build server?
I'm not quite sure if I'm over reading into this, but this comes across as a snarky response as if I've said "boo, fdroid sucks and owes me a free app store!".
Apologies if I came across like that. Here's what I'm trying to convey:
- Fdroid is important
- This sounds like a problem, not necessarily one that's any fault of fdroid
- Does anyone know of a plan to fix the issue?
For what it's worth, I do donate on a monthly basis to fdroid through liberapay, but I don't think that's really relevant here?
You are right, my message comes across as too snarky. What I wanted to give was an actionable item for the readers here.
This has now become a major issue for F-Droid, as well as for FOSS app developers. People are starting to complain about devs because they haven't been able to release new versions of their apps as promised (at least they don't show up on F-Droid).
Is Westmere the minimum architecture needed for the required SSE?
Server hardware at the minimum v2 functionality can be found for a few hundred dollars.
A competent administrator with physical access could solve this quickly.
Take a ReaR image, then restore it on the new platform.
Where are the physical servers?
Zen 2 Epyc would barely double the price of older platforms if you buy an entire server, and would run circles around them.
A slow computer that does what you want is infinitely more valuable than a fast computer that does not.
why would a fast computer refuse to do what you want?
Have you tried to get root on a phone lately? That requires strategy.
1. That's still perfectly possible
2. We're talking about x86_64 CPUs here that have been open to installing your own software basically since they existed
Did and doing regularly.
> Are google looking into rolling back the requirement? (this last one sounds unlikely)
That's apparently what they did last time. From the ticket:
"Back in 2021 developers complained that AAPT2 from Gradle Plugin 4.1.0 was throwing errors while the older 4.0.2 worked fine. The issue was that 4.1.0 wanted a CPU which supports SSSE3 and on the older CPUs it would fail. This was fixed for Gradle Plugin 4.2.0-rc01 / Gradle 7.0.0 alpha 9"
> FDroid is by far the largest non-google android store at the moment
Not even sure it's in the top 10
Wait really? What other ones are there!? Somebody's already pointed out the Samsung Galaxy store, but I don't think I know of others?
Edit: searching online found this if anyone else is interested https://www.androidauthority.com/best-app-stores-936652/
There are at least six Android app stores in China that have more than 100 million MAUs each: Huawei AppGallery, Tencent MyApp, Xiaomi Mi Store (or GetApps), Oppo, Vivo, and Honor stores.
Huawei and Honor are separate app stores?
And Oppo and Vivo too?
In both instances one company owns the other - why have competing app stores?
Because some dumbass decided to ban Huawei before, forcing Chinese brands to split into multiple sub-brands that operate independently.
Huawei was banned because some dumbass at Huawei decided that sanction skirting was worth it
Ref: https://www.nbcnews.com/news/all/u-s-says-chinese-telecom-gi...
Amazon has a big one too. I also know of a popular one called Aptoide.
Amazon closes their app store on 2025-08-20, so in 7 days.
*for non Fire devices.
I could've sworn they'd already closed it for non-Fire devices.
I think we only know about F-Droid because it's the only high quality one.
Low quality software tends to be popular among the general public because they're very bad at evaluating software quality.
>FDroid is by far the largest non-google android store at the moment
Samsung Galaxy Store is much much bigger.
Funny true story: I got my first smartphone in 2018, a Samsung Galaxy A5. I have it to this day, and it is the only smartphone I ever used. This is the first time I hear about Samsung Galaxy store! (≧▽≦)
Largest not run by the corporations then ;)
Yup! I missed that one because I didn't realise it still existed. Woops!
Why do you read "Google's build tools cannot be built from source and were compiled with optional optimizations treated as required" and assume the right thing to do is to buy newer servers?
Why not recompile aapt2 to correct target? It seems to be source available.
https://android.googlesource.com/platform/frameworks/base/+/...
Have you tried building AOSP from available sources?
Binaries everywhere. Tried to rebuild some of them with the available sources and noped the f out because that breaks the build so bad it's ridiculous.
"Binaries everywhere"
So much for "Open Source"
The binaries are open source, but Google doesn't design their build chain to recompile from scratch every time.
Also, you don't need to compile all of AOSP just to get the toolchain binaries.
With how strict F-Droid is I would have expected them to build from source all the way down. Though that sounds like a daunting task so I don't blame them.
Everything is open source, if you can read assembly ;)
Machine code. Assembly is higher level. Since data and instructions can be mixed, machine code is harder to decode: a given byte might be data or an instruction. Mel would have [ab]used this fact to make his programs work. It is worse on x86, where instructions are not fixed length, but even on ARM you can run into problems at times.
You can always lift machine code to assembly. It's a 1-to-1 process.
No, you cannot. While it is 1-to-1, you still need to know where to start: if you start at the wrong place, data will be interpreted as an asm instruction and things will decode legally, but invalidly. It is worse on CISC (like x86), where instructions are different lengths, so you can jump to the middle byte of a long instruction and decode a shorter instruction. (RISC sometimes starts to get CISC features as more instructions are added as well.)
If the code was written reasonably you can usually find enough clues to figure out where to start decoding and thus get a reasonable assembly output, but even then you often need to restart the decoding several times because the decoder can get confused at function boundaries depending on what other data gets embedded and where it is embedded. Be glad self modifying code was going out of style in the 1980's and is mostly a memory today as that will kill any disassembly attempts. All the other tricks that Mel used (https://en.wikipedia.org/wiki/The_Story_of_Mel) also make your attempts at lifting machine code to assembly impossible.
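The start-offset ambiguity is easy to demonstrate with a toy decoder that knows just two real x86 opcodes (0x05 = `ADD EAX, imm32`, 0x90 = `NOP`): the same five bytes decode to completely different programs depending on where you begin. A sketch, not a real disassembler:

```python
# Five bytes that decode differently from offset 0 and offset 1.
code = bytes([0x05, 0x90, 0x90, 0x90, 0x90])

def decode_one(buf: bytes, off: int) -> tuple[str, int]:
    """Decode one instruction; only the two demo opcodes are implemented."""
    op = buf[off]
    if op == 0x05:  # ADD EAX, imm32: opcode byte + 4-byte little-endian immediate
        imm = int.from_bytes(buf[off + 1:off + 5], "little")
        return f"add eax, {imm:#x}", 5
    if op == 0x90:  # NOP: single byte
        return "nop", 1
    raise ValueError("opcode not in this toy decoder")

def disasm(buf: bytes, start: int) -> list[str]:
    out, off = [], start
    while off < len(buf):
        text, size = decode_one(buf, off)
        out.append(text)
        off += size
    return out

print(disasm(code, 0))  # ['add eax, 0x90909090']  -- one 5-byte instruction
print(disasm(code, 1))  # ['nop', 'nop', 'nop', 'nop']  -- four 1-byte NOPs
```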
It definitely isn't a 1:1 process, as there are multiple ways to encode the same instruction (possibly even with subtle side effects depending on the encoding)
https://youtu.be/eunYrrcxXfw
... this is why we get DRM. Source modification is what hurts them.
Yes. Sources available means nothing without a reproducible build process.
So open source is only in the name, noted
Debian also seems to have given up.
Using Docker with QEMU CPU emulation would be a more maintainable solution than recompiling aapt2, as it would handle future binary updates automatically without requiring custom patches for each release.
Might be worth noting that several devs have suggested users use IzzyOnDroid instead. Due to IzzyOnDroid distributing official upstream builds (after scanning), they're not dependent on any build server.
Although they do have build servers for the purpose of confirming upstream APKs match the source code using reproducible builds, but those are separate processes that don't block each other (unlike F-Droid's rather monolithic structure).
IzzyOnDroid has been faster with updates than F-Droid for years, releasing app updates within 24 hours for most cases.
https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions#Late...
Even my last, crazy long in the tooth, desktop supported this and it lived to almost 10 years old before being replaced.
However at the same time, not even offering a fallback path in non-assembly?
> However at the same time, not even offering a fallback path in non-assembly?
There's probably not any hand-written assembly at issue here, just a compiler told to target x86_64-v2. Among others, RHEL 9 and derivatives were built with such options. (RHEL 10 bumped up the minimum spec again to x86_64-v3, allowing use of AVX.)
Or even, a compiler told to target nothing in particular, and a default finally toggled over from "Oh, we're 'targeting x86'? So CPUs from the early 2000s then" to "Oh, we're 'targeting x86'? So CPUs from the mid-2010s then."
Looking at the issue their builders seem to be Opterons G3 (K10?)[0]
[0] https://en.wikipedia.org/wiki/AMD_10h
At this point they're guzzling so much power that the electricity is more expensive than a replacement platform.
I can imagine this has to be like that as they usually get $1500 per month in donations.
You could buy a newer one but I guess they have other stuff they have to pay for.
For $500 you can get a decent refurbished server on ebay that supports those “new” extensions
I am 100% sure that if they put out a call to action and asked for hardware donations they would be able to get newer stuff. A Ryzen 7 1700 goes for as cheap as $50, and DDR4 RAM at supported speeds (2133 MHz) is also dirt cheap.
$1500/month is probably swallowed by what power pigs those Opterons are; like, they are bad, real bad.
This is a bit of a vicious circle. How much of that money even goes into keeping those servers running? The electricity bill alone, geez. They could do a dedicated fundraiser to get themselves two boxes that are a decade old and still have spare parts available; coming from the Broadwell era, they will have enough instruction set support to cover the baseline towards which multiple distros are converging (Haswell and up).
Given their target audience, they could probably just request a hardware donation. Some sysadmin out there is probably getting rid of exactly what they need.
if it's colocated (surely the case) they aren't paying per kWh
>$1500/month
Wow, i just got into newpipe/fdroid. Its neat to think even a donation the size of mine can be almost individually meaningful :)
I have a home server with a 9th gen i7 that's doing jack sh!t most of the time, is there a way to donate some compute time to build F-Droid packages?
The problem with offering fallbacks is testing -- there isn't any reasonable hardware which you could use, because as you say it's all very old and slow.
I'm sure theyll appreciate your old desktop donation
I don't fully understand: aren't gradle and aapt2 open-source ?
If you want to build buildroot or openwrt, the first thing it will do is compiling your own toolchain (rather than reusing the one from your distro) so that it can lead to predictable results. I would have the same rationale for f-droid : why not compile the whole toolchain from source rather than using a binary gradle/aapt2 that uses unsupported instructions?
SDK binaries provided by Google are still used, see https://forum.f-droid.org/t/call-for-help-making-free-softwa...
I agree, this should be the case, but Gradle specifically relies on downloading prebuilt java libraries and such to build itself and anything you build with it, and sometimes these have prebuilt native code inside. Unlike buildroot and any linux distribution, there's no metadata to figure out how to build each library, and the process for them is different between each library (no standards like make, autotools and cmake), so building the gradle ecosystem from source is very tedious and difficult.
having worked with both mvn and gradle, i always have a good chuckle when i hear about npm "supply chain" hacks.
Apparently it was fixed upstream by Google?
https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153
Not sure how long it will take to get resolved but that thread seems reassuring even if there isn't a direct source that it was fixed.
It is not fixed.
In the thread you linked to people are confusing a typo correction ("mas fixed" => "was fixed") as a claim about this new issue being fixed.
The one that was fixed is this similar old issue from years ago: https://issuetracker.google.com/issues/172048751
Oh, that's unfortunate, very confusing thread.
Still haven't. Currently, most of the devs aren't aware of this underlying issue!
As far as I can see, sse4.1 has been introduced in CPUs in 2011. That's more than 10 years ago. I wonder why such old servers are still in use. I'd assume that a modern CPU would do the same amount of work with a fraction of energy so that it does not even make economical sense to run such outdated hardware.
Does anyone know the numbers of build servers and the specs?
> I'd assume that a modern CPU would do the same amount of work with a fraction of energy so that it does not even make economical sense to run such outdated hardware.
There are 8,760 hours in a non-leap year. Electricity in the U.S. averages 12.53 cents per kilowatt hour[1]. A really power-hungry CPU running full-bore at 500 W for a year would thus use about $550 of electricity. Even if power consumption dropped by half, that’s only about 10% of the cost of a new computer, so the payoff date of an upgrade is ten years in the future (ignoring the cost of performing the upgrade, which is non-negligible — as is the risk).
And of course buying a new computer is a capital expense, while paying for electricity is an operating expense.
1: https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...
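Reproducing that back-of-envelope math (the 500 W draw and 12.53 ¢/kWh rate are the comment's figures, not measurements):

```python
# Annual electricity cost of one power-hungry server running full-bore.
HOURS_PER_YEAR = 8760          # non-leap year
RATE_USD_PER_KWH = 0.1253      # U.S. average, per the EIA table cited above
DRAW_KW = 0.5                  # 500 W sustained

annual_cost = DRAW_KW * HOURS_PER_YEAR * RATE_USD_PER_KWH
print(f"${annual_cost:.2f}/year")    # $548.81/year
savings_if_halved = annual_cost / 2  # ~$274/year if a new box halves the draw
```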
You can buy a mini pc for less than $550. For $200 on Amazon you can get an N97 based box with 12 GB RAM and 4 cores running at 3 GHz and a 500 GB SATA SSD. That’s got to be as fast as their current build systems and supports the required instructions.
Those single memory channel shitboxes aren't even fast enough to be usable during big windows updates let alone used in production.
One channel of DDR5-4800 actually competes pretty well against four channels of DDR3-1333 spread across two chiplets, which was the best Opteron configuration old enough to not have SSE4.1.
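Rough peak-bandwidth numbers behind that comparison, assuming 64-bit channels moving 8 bytes per transfer; real-world throughput is lower on both sides, and the Opteron figure ignores the cross-chiplet penalty:

```python
def peak_gb_s(mega_transfers_per_s: int, channels: int) -> float:
    """Theoretical peak DDR bandwidth in GB/s: MT/s x 8 bytes x channels."""
    return mega_transfers_per_s * 8 * channels / 1000

ddr5_single = peak_gb_s(4800, 1)  # one channel of DDR5-4800
ddr3_quad = peak_gb_s(1333, 4)    # four channels of DDR3-1333
print(ddr5_single, ddr3_quad)     # 38.4 vs 42.656 GB/s -- surprisingly close
```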
If you don't understand bandwidths and how long components can run at the 80th percentile before failure, you're out of your element in this discussion.
It has been introduced in Intel Penryn, in November 2007.
However the AMD CPUs did not implement it until Bulldozer, in mid 2011.
While they lacked the many additional instructions provided by Bulldozer, including AVX and FMA, for many applications the older Opteron CPUs were significantly faster than the Bulldozer-based CPUs, so there were few incentives to upgrade before the launch of AMD Epyc in mid-2017.
SSE 4.1 is a cut point in supporting old CPUs for many software packages, because older CPUs have a very high overhead for divergent computations (e.g. with if ... else ...) inside loops that are parallelized with SIMD instructions.
I haven’t seen the real answer that I suspect here - the build servers are that one dual socket AMD board which runs open firmware and has no ME/PSP .
On the server side, probably not, but I'd like to point out that old hardware is not uncommon, and it's going to be more and more likely as time passes especially in the desktop space.
I was hit by this scenario in the 2000s with an old desktop PC I had, also in the 10-year range, that I was using just for boring stuff and random browsing; it was old, but perfectly adequate for the purpose. With time, programs got rebuilt with some version of SSE it didn't support. When even Firefox switched to the new instruction set, I had to essentially trash a perfectly working desktop PC, as it became useless for the purpose.
I was going to say that I assume that the reason for such old CPUs is the ability to use Canoeboot/GNU Boot. But you absolutely can put an SSE4.2 CPU in a KGPE-D16 motherboard. So IDK.
Because setting up servers is an annoying piece of grunt work that people avoid doing more than absolutely necessary. There's a reason the expensive options of AWS, Azure, and Google Cloud make money: much of it "just works" when you focus on applications rather than the infra (until you actually need to do something advanced and the obscure commands or clicking bite you in the ass).
A few months ago Adobe finally updated Lightroom Classic to require these processor extensions. To squeeze all of the matrix mults it can for AI features also in CPU mode.
It's amazing how long of a run top end hardware from ~2011 has had (just missed the cutoff by a few months). It's taken this long for stuff to really require these features.
Hardware after the first couple of generations of multicore x86_64 processors is perfectly capable for use as servers, even for tasks you want to offload to a build farm.
> Google’s new aapt2 binary in AGP 8.12.0
Given F-Droid's emphasis on isolating and protecting their build environment, I'm kind of surprised that they're just using upstream binaries and not building from source.
Relatedly, we don't really have any up to date free software build of the Android SDK AFAIK. To build Android apps, we all rely on the Google binaries, which are non-free.
https://forum.f-droid.org/t/call-for-help-making-free-softwa...
The Catima thread makes FDroid sound like a really difficult community to work with. Although I'm basing this on one person's comment and other people agreeing, not on any knowledge or experience.
> But this is like everything with F-Droid: everything always falls on a deaf man's ears. So I would rather not waste more time talking to a brick wall. If I had the feeling it was possible to improve F-Droid by raising issues and trying to discuss how to solve them I wouldn't have left the project out of frustration after years of putting so much time and energy into it.
F-droid are thoroughly understaffed and yet incredibly ambitious and shrewd around their goals - they want to build all the apps in a reproducible manner. There’s lots of friction around deviating from builds that fit within their model. The system is also slow, takes a long while before a build shows up. I think f-droid could benefit immensely from more funding, saying that as someone who has never seen f-droid’s side, but have worked on an app that was published there.
I saw that too and was wondering what kind of drama happened in the past
Very unexciting stuff; it's just your typical long-running FOSS project issues as I understand it. Lead maintainer of F-Droid is entrenched in his ways "cuz it works for me", which leads to stonewalling any attempts to change or improve the F-Droid workflow[0], but since he holds the keys to the kingdom (and the name recognition prevents forks), they keep him around.
Everyone else then tries to work around him and, through a mixture of emotional appeals, downplaying the importance of certain patches, and doing everything in very tiny steps, tries to improve things. It's an extremely mentally draining process that's prone to burnout on the part of the contributors, which eventually boils over, and then some people quit... which might start a conversation on why nobody wants to contribute to the FOSS project. That conversation inevitably goes nowhere because the people you'd want to hold that conversation with are so fed up with how bad things have gotten that they'd rather just see the person causing trouble removed entirely. (Which may be the correct course of action, but this argument is often made without putting forward a proper replacement or considering how the project might move forward without them. Some larger organizations can handle the removal of a core maintainer; most can't.) Rinse and repeat that cycle every five years or so.
F-Droid isn't at all unique in this regard, and most people are willing to ignore it "because it's free, you shouldn't have any expectations". Any long running FOSS project that has significant infrastructure behind it will at some point have this issue and most haven't had a great history at handling it, since the bus factor of a lot of major FOSS projects is still pretty much one point five people. (As in, one actual maintainer and one guy that knows what levers to pull to seize control if the maintainer actually gets hit by a bus, with the warning that they stop being 0.5 of a bus factor and become 0 if they do that while the maintainer is still around.)
[0]: Basically the inverse of https://xkcd.com/1172/
This is the sort of stuff that makes me want to pursue FIRE. There's so much good that could be done, but isn't because people need to be making money for someone else.
Then again who is to say that I would be a better custodian than this guy?
I like your energy; and I like your awareness that more control/different center of power may not help. This is where community-oriented leadership techniques could go a long way. To build trust, maintain peoples' roles and dignity, but to increase that awareness and enable floodlight focus (big picture) in addition to flashlight focus.
Their servers are so old, even an entirely different architecture emulating x86_64 would still see a performance increase... So there's no OSS argument here - they could even buy a Talos, have no closed firmware, and still see a performance increase with emulation. If they don't care about the firmware, there are plenty of very cheap x86 options which are still more modern.
> Their servers are so old
When I read this, pop culture has trained me to expect an insult, like: “Their servers are so old, they sat next to Ben Franklin in kindergarten.”
It seems quite implausible that F-Droid is actually running on hardware that predates those instruction set extensions. They're seeing wider adoption by default these days precisely because hardware which doesn't support them is getting very rare, especially in servers still in production use. Are you sure this isn't simply a matter of F-Droid using VMs that are configured to not expose those instructions as supported?
This is sort of like a bug I hit last year when the mysql docker container suddenly started requiring x86-64-v2 after a patch level upgrade and failed to start: https://github.com/docker-library/mysql/issues/1055
That’s a tough one. It’s ironic that the very platform meant to keep apps open and accessible is now bottlenecked by outdated hardware.
Upgrading the build farm CPUs seems like the obvious fix, but I’m guessing funding and coordination make it less straightforward. In the meantime, forcing devs to downgrade AGP or strip baseline profiles just to ship feels like a pretty big friction point.
Long term, I wonder if F-Droid could offer an optional “modern build lane” with newer hardware, even if it means fewer guarantees of full reproducibility at first. That might at least keep apps from stalling out entirely.
I've said this before, but I'll say it again. Running on donations is not a viable strategy for any long-term goal. FOSS needs to passively invest the donations. That is a viable long-term strategy. Now when things like this happen, it becomes a major line item moment, and not a limp-along situation, with yet another WE NEED YOUR HELP banner blocking off 1/2 their website.
I'm a bit lost in this thread, but I've written up what I know for other dummies like me
aapt2 is a standalone x86_64 binary used to build Android APKs for various CPU targets.
Previous versions of it used a simpler instruction set, but the new version requires extra SIMD instruction sets (SSSE3 and SSE4.1). A lot of CPUs after 2008 support these, but apparently not F-Droid's current server farm?
> Our machines run older server grade CPUs
So it's a bit of both: older hardware, and server hardware whose feature set doesn't match that of consumer chips. I'd imagine some server hardware vendors supported SSE4 way earlier than most, and some probably supported it way later than most, too.
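A quick way to check whether a given Linux build host would run the new aapt2 is to look for these flags in `/proc/cpuinfo`, where they appear as `ssse3` and `sse4_1`. A minimal sketch (the sample flag strings in the demo are illustrative, not taken from any real F-Droid server):

```python
# Check whether a Linux host advertises the SIMD extensions the new
# aapt2 reportedly requires (SSSE3 and SSE4.1), based on /proc/cpuinfo.

def missing_flags(cpuinfo_text, required=("ssse3", "sse4_1")):
    """Return the required CPU flags absent from a /proc/cpuinfo dump."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return [f for f in required if f not in flags]

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        missing = missing_flags(f.read())
    print("OK for aapt2" if not missing else f"missing: {missing}")
```

Note that a CPU advertising `sse4a` (AMD's own extension, present on K10-era Opterons) still fails this check: SSE4a is unrelated to SSE4.1 despite the similar name.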
Perhaps there should be more than one F-Droid
For example, if they published their exact setup for building Android apps so others could replicate it
How many Android users compile the apps they use themselves?
Perhaps increasing that number would be a goal worth pursuing
Fortunately the source code is available:
https://android.googlesource.com/platform/frameworks/base/+/...
If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd.
"If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd"
The idea that not supporting a 20+ year old system is "planned obsolescence" is a bit shallow
There is no point for Google to push planned obsolescence on the PC or server space. They don't have a market there.
It does benefit them to make it harder for competitors.
When you mention "competitors," what industries or markets are you referring to?
No one would write Android apps on a Chromebook, and making it harder to do so would only reduce the incentive for companies to develop Android apps.
How could Google benefit from pushing a newer instruction set standard on Windows and macOS?
The one moderately popular competitor is the project in the OP that is suffering directly from this upstream change.
I doubt Google even cares about F-Droid. The Play Store competes with the iOS App Store, Huawei's App Galery, and probably the Samsung Store long before F-Droid becomes relevant.
If they required a Google-specific Linux distro to build this thing, or if they went the Apple route and added closed-source components to the build system, this could be seen as a move to mess with the competition. But this is simply a developer assuming that most people compiling apps have a CPU produced less than 15 years ago (and that the rest can just recompile the toolchain themselves if they like running old hardware).
With Red Hat and Oracle moving to SSE4.1 by default, the F-Droid people will run into more and more issues if they don't upgrade their old hardware.
While your perspective makes some sense, it's highly improbable. It's unlikely that Google was aware of F-Droid's infrastructure specs, or its inability to fix the issue in advance.
It seems you're suggesting a very specific, targeted attack.
> It seems you're suggesting a very specific, targeted attack.
Yes, just like it happened with Firefox: https://news.ycombinator.com/item?id=38926156
As if it were a one-off thing to support some system. You must maintain it and account for it in all the features you bring in going forward.
The Win95 API is pretty incomplete. That was actually a terrible OS. The oldest I'd go playing this game with anything serious is probably XP.
It can read files, write files, and allocate memory. Is there anything else you need to compile software?
Can it? Files on Windows 95 and files on most Unix-like OSes are very different things.
They're the same from the perspective of a stream of persistent bytes.
If you want "very different" then look at the record-based filesystems used in mainframes.
Do you have any recommended reading about record-based filesystems?
But you don't, so you won't, scoring one for the planned obsolescence crowd.
And so won't anyone else who has time to complain about planned obsolescence, and that includes myself.
That F-Droid even requires doing the build itself is one of the reasons I created Discoverium.
https://github.com/cygnusx-1-org/Discoverium/
That F-Droid requires doing the build itself ensures all apps provided by F-Droid are free software (as in freedom) and proven to be buildable by someone other than the app developer.
The issue is more complicated than that.
Do you mean the overall issue or that F-Droid’s guarantees are arguable? The guarantees may not be the whole discussion, but for many they are the most relevant piece.
Edit: or perhaps you mean that isn’t the only way to provide such guarantees, which is the implication I got reading your other replies.
How so?
> and proven to be buildable by someone other than the app developer
Yup. That's a huge, huge issue - IME especially once Java enters the scene. Developers have all sorts of weird stuff in their global ~/.m2/settings.xml that they set up a decade ago and probably don't even think about... real fun when they hand over the project to someone else.
So I should take a binary from a random stranger because trust me bro?
It is a modified version of Obtainium. You get it from the author via GitHub.
I’ve got an old Ivy Bridge-EP Dell workstation they can borrow. Goddamn, SSE4.1 is nearly old enough to drink.
SSE4.1 can legally buy lightly alcoholic beverages in various European countries already. Next year, it can buy strong spirits.
Using AMD hardware that's "only" 13 years old can also cause this problem, though.
Yeah I was kind of shocked too. Core 2 could do both of those instruction sets. A used Dell Precision can be had for very little and probably would be grossly more efficient than whatever they're using.
Man, Android could have been way cooler if it actually used real virtual machines, or at least the JVMs.
I stood by Oracle, because in the long term, as has been proven, Android is Google's J++, and Kotlin became Google's C#.
Hardly any different from what was in the genesis of .NET.
Nowadays they support up to Java 17 LTS (only a subset, as usual), mostly because Android was being left behind in accessing the Java ecosystem on Maven Central.
And even though now ART is updatable via PlayStore, all the way down to Android 12, they see no need to move beyond Java 17 subset, until most likely they start again missing on key libraries that decided to adopt newer features.
Also, don't count on stuff like Panama, Loom, Vector, or Valhalla (if it ever ships) being supported on ART.
At least they managed to push into the mainstream the closest thing to OSes like Oberon, Inferno, JavaOS and co., where regardless of what one thinks about the superiority of UNIX clones, here they have to content themselves with a managed userspace, something Microsoft failed at with Longhorn, Singularity, and Midori due to their internal politics.
> Kotlin became Google's C#
Are Google buying Jetbrains?
They almost could; after all, they have outsourced most of the Android tooling efforts to JetBrains, given that Android Studio is mostly IntelliJ + CLion, and Kotlin is the main Android language nowadays.
Also Kotlin Foundation is mostly JetBrains and Google employees.
ARM phones didn't have virtualisation back in the day so that would've been impossible.
Modern Android has virtual machines on devices with supported hardware+bootloader+kernels: https://source.android.com/docs/core/virtualization
JVM??? hell no, native FTW
I thought SSE 4.1 dates back to 2008 or so?
The build servers appear to be AMD Opteron G3s, which only support part of SSE4 (SSE4a). Full SSE4 support didn't land until Bulldozer (late 2011).
I appreciate that this is a volunteer project, but my back-of-the-envelope math suggests that if they upgraded to a $300 laptop using a 10nm Intel chip, it would pay for itself in power usage within a few years. Actually, probably less, considering an i3-N305 has more cores and substantially faster single-threaded performance.
And yes, you could get that cost down easily.
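The back-of-the-envelope math can be made explicit. Every number below is an illustrative assumption (power draws and electricity price are guesses, not measured figures for F-Droid's hardware):

```python
# Rough payback estimate for replacing an old build server with a
# modern low-power machine. All inputs are illustrative assumptions.
old_power_w = 250        # assumed draw of an Opteron-era server under load
new_power_w = 30         # assumed draw of a small i3-N305 machine
price_per_kwh = 0.30     # assumed electricity price
hardware_cost = 300      # the $300 laptop from the comment above

saved_kw = (old_power_w - new_power_w) / 1000
yearly_savings = saved_kw * 24 * 365 * price_per_kwh
payback_years = hardware_cost / yearly_savings
print(f"yearly savings: {yearly_savings:.0f}, payback: {payback_years:.2f} years")
```

Under these assumed numbers the hardware pays for itself in well under a year; even with much more conservative inputs, "a few years" looks right.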
Yes, a used laptop would be an upgrade from server hardware of that vintage, in performance and probably in reliability. If they're really using hardware that old, that is itself a big red flag that F-Droid's infrastructure is fragile and unmaintained.
(A server that old might not have any SSDs, which would be insane for a software build server unless it was doing everything in RAM.)
How is it that if hardware is old, that means it's unmaintained, or that if it's old, it can't have SSDs? Neither of those things are typically inferred from age.
I still maintain old servers, and even my Amiga server has an SSD.
If they're running hardware that old, and it's causing them software compatibility problems, then we can infer that their infrastructure is unmaintained, because the cost of moving to newer hardware is so low that the cost of newer hardware could not plausibly be the reason they haven't moved to new hardware. There's dirt cheap used server hardware that would be substantially faster, cheaper to operate, and not have software compatibility issues like this. Money can't be preventing them from using newer hardware.
We don't know for sure the servers don't have SSDs, but we do know that back in the days of server hardware that didn't support SSE4.1, SSDs had not yet displaced hard drives for mainstream storage, so it's likely that servers that old didn't originally ship with SSDs. It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
A server at that age is also going to be harder to repair when something dies, and it's due for something to die. If they lose a PSU it might be cheaper to replace the whole system with something a bit less old. Other components they'd have to rely on replacing with something used, from a different manufacturer than the original, or use a newer generation component and hope it's backwards compatible. Hence why I said using hardware that old would imply their infrastructure is fragile.
But all of this is still just speculation because nobody involved with F-Droid has actually explained what specific hardware they're using, or why. So I'm still not convinced that the possibility of a misconfigured hypervisor has been ruled out.
I have computers from the early 2000s that now have SSDs in them. You can get cheap adapters to use SATA and CompactFlash storage on old machines.
There are some other possible virtues besides performance and (probable) reliability.
I work in the refurb division of an ewaste recycling company[0]. $300 will get you a very nice used Thinkpad or Dell Latitude. They might even get by with some ~$50 mini desktops.
[0] https://www.ebay.com/str/evolutionecycling
It will have Intel ME which makes the whole open-source ideology... compromised?
If they're relying on binaries from Google, then it's already compromised.
there are a handful of vendors that will sell you an intel chip with the me disabled, as well as arm vendors that ship boards without an me-equivalent at all
the point of my post still stands
Do I need to be the US Military for that?
Intel ME is not a feature for users; it is intended to control any modern CPU except the ones going to the US Army/Navy. It is needed to make Stuxnet-class attacks. The latest chip where the ME can provably be disabled is the 3rd gen.
Someone send these people a Slimbook.
It's insane; I would give them my old Xeon Haswell machine for free, but the shipping cost is likely more than the cost of the machine itself.
Yes, SSSE3 and SSE4.1 were introduced around 2006-2007, and the F-Droid build servers that can't run them are still used to build modern versions of some of the most popular FOSS apps.
Known for ages... another issue that's been talked about, but instead of resolving it, the F-Droid team decided to brush it under the rug and misdirect people rather than address it.
"Overall, this case study highlights how F-Droid’s inclusion policy ultimately harms end users by forcing app developers to adopt potentially decrepit development tools and build processes in service of its regnant FOSS ideology."
Funny how this article is continually proved accurate and poignant: https://privsec.dev/posts/android/f-droid-security-issues/
A shitton of people, including all F-Droid users, would take FOSS ideology over newfangled bloated "non-decrepit" development tools _any day_.
But in any case, this is a false dichotomy, and likely an exaggerated one to begin with.
I think it's extremely useful to have more strict requirements on how programs are built, to make sure that developers don't do stupid things that makes code harder for others to compile.
The tools in question in OP should be easy to build from source and not rely on the host's architecture, to be usable on platforms like ARM and RISCV. It's clear that in the android ecosystem, people don't care, so F-Droid can't do miracles (the java/gradle ecosystem is just really bad at this), but this would not happen if the build tools had proper build recipes themselves.
As a user, i'm glad when devs use old tools so that my battery has a chance of lasting the whole day and my apps don't take 10 seconds just to open.
> As a user, i'm glad when devs use old tools so that my battery has a chance of lasting the whole day and my apps don't take 10 seconds just to open.
Yup, same here! The story is as old as time, and the examples are plentiful. First Slashdot, then Reddit, then now GitHub, all became far-far-far slower and less usable, once they've been "improved" by the folk engaging in the resume-driven development:
Why is GitHub UI getting slower? - https://news.ycombinator.com/item?id=44799861 - Aug 2025 (115 comments)
I am, too, as a user, quite pleased that F-Droid is keeping it cool and reliable for the actual users.
Do I understand correctly that they run their build infrastructure on at least 15-year-old hardware?
Is it the CPUs or the compilers? Or possibly a CI/CD runner that has to run something that can’t run on these CPUs?
There are even some "Unknown problem" failures for app publishing on the IzzyOnDroid repo, even with reproducible builds ensured. Izzy says: "Not necessarily 'your fault' – baseline often has such issues": https://github.com/CompassMB/MBCompass/issues/90
Seems like he is saying the developer is partly responsible for that, too!
IzzyOnDroid can publish updates even if it's not reproducible, this is not an "app publishing" issue at all. IzzyOnDroid can deal with AGP 8.12 fine.
Also "not necessarily your fault" means "probably not your fault", the opposite of "your fault"
Non-hacker here. The title says "modern". I don't need modern, have a 10 year old phone, can I still get the occasional simple app from F-Droid?
I upped my (small) monthly contribution. Hope more people contribute, and also work to build public support.
Also, for developers .. please include old fashioned credit cards as a payment method. I'd like to contribute but don't want to sign up for yet another payment method.
Static qemu-user on Linux supports transparently emulating binaries whose instructions the host is missing. Depending on details I haven't figured out, it can be a lot slower running this way, or close enough to native. I have gotten that working, but it was a pain and I don't remember what was needed (most of the work was done by someone else, but I helped).
It's super annoying how software vendors forcibly deprecate good-enough hardware like this.
I genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of this.
The problem is that your "good enough" is someone else's "woefully inadequate", and sticking to the old feature sets is going to make the software horribly inefficient - or just plain unusable.
I'm sure there's someone out there who believe their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.
At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?
Ah, c'mon, spare me these strawman arguments. Good enough is good enough. If F-Droid isn't worried about that, you definitely have no reason to worry on their behalf.
"A tiny group is holding back everyone" is another silly strawman argument: all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling another binary and putting it into a package. Nobody is being held back by anyone; you can't make a sillier argument than that...
But it isn't good enough. SIMD provides measurable improvements to some people's code. To those people what we had before isn't good enough. Sure for the majority SIMD provides no noticeable benefit and so what we had before is good enough, but that isn't everybody.
OTOH, if software wants to take advantage of modern features, it becomes hell to maintain if you have to have flags for every possible feature supported by CPUID. It's also unreasonable to expect maintainers to package dozens of builds for software that is unlikely to be used.
There are guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest set of features includes SSE4.1, which covers nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we would only need to distribute one set of binaries. This would of course be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.
[1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...
[2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
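The microarchitecture levels from [2] are cumulative: each level requires everything below it plus its own additions. A sketch of how a flag set maps to a level (feature lists abbreviated to the headline extensions, so this is illustrative, not the complete psABI definition):

```python
# x86-64 microarchitecture levels (abbreviated): each level requires
# all features of the previous levels plus its own additions.
LEVELS = {
    "x86-64-v1": {"sse", "sse2"},                      # 2003 baseline
    "x86-64-v2": {"ssse3", "sse4_1", "sse4_2", "popcnt"},
    "x86-64-v3": {"avx", "avx2", "bmi2", "fma"},
    "x86-64-v4": {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"},
}

def highest_level(cpu_flags):
    """Return the highest level whose cumulative feature set is satisfied."""
    best = None
    needed = set()
    for level, features in LEVELS.items():
        needed |= features                 # levels are cumulative
        if needed <= set(cpu_flags):
            best = level
        else:
            break
    return best
```

An Opteron-G3-style flag set with `sse4a` but no `ssse3`/`sse4_1` stops at x86-64-v1, which matches the build failure described in the article: the new aapt2 effectively assumes at least v2.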
In most cases (and this was the case with the Mozilla issue I referred to) it's only a matter of compiling code that already has all the necessary support. They are using some upstream component that works perfectly fine on my architecture. They just decided to drop it, because they could.
It's not only your own software, but also its dependencies. The link above is for glibc, and is specifically addressing incompatibility issues between different software. Unless you are going to compile your own glibc (for example, doing Linux From Scratch), you're going to depend on features shipped by someone else. In this case that means either baseline, with no SIMD support at all, or level A, which includes SSE4.1. It makes no sense for developers to keep maintaining software for 20-year-old CPUs when they can't test it.
> Unless you are going to compile your own glibc (for example, doing Linux From Scratch),
It's not that hard to use gentoo.
The F-Droid builds have been slow for years, and with how old their servers apparently are, that isn't even surprising in retrospect.
I don't know how many servers they're using, or the server specs beyond ancient Opterons, but how is this even an issue in 2025?
On Hetzner (not affiliated), at this moment, an i7-8700 (AVX2 supported) with 128 GB RAM, 2x1 TB SSD, and a 1 Gbit uplink costs 42.48 EUR per month, VAT included, in their server auction section.
What are we missing here, besides that build farm was left to decay?
Either they want to run on ideologically pure hardware too, without pesky management bits in it (or even indeed UEFI), or they are just "it used to work perfectly" guys.
In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.
Well if you wanted to compromise F-Droid you could target their build server's ME or a cloud vm's hypervisor.
To do a supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.
The recent attack on AMI firmware in Gigabyte servers shows how a zero-day can bootkit a UEFI server quite easily.
There are newer Coreboot boards than Opteron-era ones, though. Some embedded-oriented BIOSes let you fuse out the ME. You are warned this is permanent and irreversible.
F-Droid likely has upgrade options even in the all-open scenario.
I agree with you. Unfortunately, the simplest explanation is often the truth, so they probably just ignored this issue until it surfaced.
In other words,
> they are just "it used to work perfectly" guys.
WTF, they can't still be running Opterons. It has to be that they're using QEMU with an Opteron G3 CPU profile... right?
I think this might give Google some ideas...
Note: the underlying blame here fundamentally belongs to whoever built AGP / Gradle with non-universal flags, then distributed it.
It's fine to ship binaries with hard-coded cpu flag requirements if you control the universe, but otherwise not, especially if you are in an ecosystem where you make it hard for users to rebuild everything from source.
Exactly. Everything should be compiled to target i386.
/s (should be obvious but probably not for this audience)
> control the universe
Guess what the company behind Android wants to do...
On the other hand, we have "personal" data centers for AI and mining farms for crypto.
Put another way, Google is requiring you to have at least a Penryn-era Intel chip, from around 2008.
I don't get the issue; the binary target is completely independent of the host target on all but the most basic setups.
> The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support. This is similar to a 2021 AGP 4.1.0 issue, but it has returned, and now affects hundreds of apps.
I don't know why they enabled modern CPU flags for a simple intermediary tool that compiles the APK resource files; it was so unnecessary.
Welp, there go my plans of salvaging an old laptop to build my Android apps.
Requiring (supposedly) universally available CPU instructions is one thing. Starting to require it in a minor version update (8.11.1 -> 8.12.0) is a whole different thing. What the heck happened to semantic versioning? We can't even trust patch updates anymore these days. The version numbers might as well be git commit IDs.
Can't cross-compilation help with that? The CPU doing the compiling doesn't need to match the target.
It's not the target that is now requiring new instructions, but one of the components in the build tools.
I see.
> The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support.
Very intelligent move from Google. Now you can't compile "Hello World" without SSE4.1, SSSE3. /s
Are there any x86 tablets running Android?
There are very few 17+ years old build servers at this point. Or laptops and desktops for that matter.
Encourage your users to use Obtainium instead. Cut out the middleman.
https://github.com/ImranR98/Obtainium
Half the point is that I trust this middleman more than the app devs. When app developers turn evil ( https://news.ycombinator.com/item?id=38505229 ), I explicitly want someone reviewing things and blocking software that works against my interests before it gets to me.
Obtainium assumes that the app developer is a trustworthy entity, when in reality the mobile ecosystem being as fucked up as it is primarily comes from app developers. (Due to bad incentives created by mobile platform makers, mainly Apple.)
You need a middleman in place in case the app developer goes bad.
this seems to be a general app finder and tracker. useful, but entirely different from what f-droid does, namely verify that apps are actually Free Software or Open Source and buildable from source.
I have it installed. But the only thing I get updates for is Obtainium itself. There's no catalogue of apps, so I haven't installed anything via Obtainium.
Here's a catalog of apps from the Obtainium wiki.
https://apps.obtainium.imranr.dev/
They put the disclaimer on top that this list is not meant as an app store or catalog. It's meant for apps with somewhat complex requirements for adding to Obtainium. But it serves well as a catalog since most of the major open source apps are listed.
I would uninstall. Author and app seem sketchy.
Will you elaborate?
Try Discoverium
How is this not another middleman (with a political banner in its README no less)?
Wow. That banner slipped by me on first read. Thanks for pointing it out. I tried to go to the dev's webpage, and I needed a VPN to access it. If he actually believed what he said, he wouldn't block IPs, he'd attempt to educate. Seems like bad-faith xenophobia role-playing as compassion.
Not sure what you found, but some of the "interesting links" on his website suggest a conspiracy theorist.
If I was the state, conspiring against the people, the first thing I'd do would be to program the masses to ridicule the intelligent ones who spot the signs and theorise about a conspiracy - I'd teach the masses to point and laugh at wacky "conspiracy theorists"
What I would do is spread obviously ridiculous theories in order to distract attention from the real problem.
> If he actually believed what he said
Believe in what? A fact that is being actively documented in Gaza by NGOs and corroborated by numerous news agencies internationally?
This is all coming across as dishonest (especially when looking at your own homepage).
I think it acts more as an RSS feed reader, rather than building and hosting apps on its own.
At this point it is not political; the banner mentions a fact and a tragedy, and links to donations for reputable NGOs.
I know this is off-topic, as is this whole sub-thread by now. But is there a way to read the news as the Israelis do? I sometimes read rt.com (even though I need a vpn for that, somehow my government feels I'm not allowed to study this??), it helps me understand how Russian media presents news to their citizens. Is there anything like that for Israeli news?
Our Dutch news (and I think most EU news) is pretty much presenting us with the view that Israel has lost it (stories about young men searching for food being shot in the genitals for fun and such [0]), so I'm very curious how their government presents things to its civilians.
[0] https://nos.nl/nieuwsuur/artikel/2575933-beschietingen-bij-z...
Would you prefer English content? You could try ynetnews.com, which I believe is translated from Ynet's Hebrew articles, for a very mainstream Israeli source.
There are also fully English sources like the Times of Israel, though it has a somewhat international audience, not only Israelis.
Thanx! (Yes English is best!)
>> This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.
What an entitled conclusion.