 _______               __                   _______
|   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
|       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
|___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                         on Gopher (unofficial)

Visit Hacker News on the Web

COMMENT PAGE FOR:
Swift's native Clocks are inefficient

adsharma wrote 9 hours 17 min ago:
clock_gettime_nsec_np() seems interesting in that it returns a u64. I proposed something similar for Linux circa 2012. The patch got lost in some other unrelated discussion and I didn't pursue it.
struct timeval is a holdover from the 32-bit era. Now that everyone is using 64-bit machines, we should be able to get this data by reading one u64 from a shared page.
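
(For reference, a minimal sketch of calling it from Swift on Darwin; the monotonic-raw clock here is just one of the clock IDs it accepts:)

  import Darwin

  // One call, one u64 result: no timespec to reassemble.
  let t0 = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)
  // ... work being timed ...
  let t1 = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)
  print("elapsed: \(t1 - t0) ns")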

adsharma wrote 9 hours 15 min ago:
[1]: https://lkml.org/lkml/2011/12/12/438

diebeforei485 wrote 9 hours 23 min ago:
It's required of native apps because native apps are full of APIs that collect and sell user data. That's why.

loeg wrote 11 hours 24 min ago:
Wow, hundreds of milliseconds is a lot worse than I'd expect. I'm not shocked that it's slower than something like plain `rdtsc` (single digit nanoseconds?) but that excuses maybe microseconds of overhead -- not milliseconds and certainly not hundreds of milliseconds.

layer8 wrote 10 hours 50 min ago:
That's for a million iterations, so really nanoseconds.

gok wrote 10 hours 59 min ago:
It's hundreds of milliseconds to do a million iterations. A single time check is hundreds of nanoseconds.

loeg wrote 10 hours 45 min ago:
Oh, thanks. The table was unlabeled and I missed that in the text. Hundreds of nanos isn't great but it's certainly better than milliseconds.

Kallikrates wrote 12 hours 22 min ago:
[1]: https://github.com/apple/swift/pull/73429

simscitizen wrote 13 hours 16 min ago:
Just use clock_gettime with whatever clock you want. There's also a np (non-POSIX) suffixed variant that returns the timestamp in nanoseconds.
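
(A sketch of both forms from Swift, in case it's useful; same Darwin APIs as above:)

  import Darwin

  // Classic POSIX form: fills a timespec with seconds + nanoseconds.
  var ts = timespec()
  clock_gettime(CLOCK_MONOTONIC, &ts)
  let nanos = UInt64(ts.tv_sec) * 1_000_000_000 + UInt64(ts.tv_nsec)

  // The np variant returns nanoseconds directly as a u64.
  let nanos2 = clock_gettime_nsec_np(CLOCK_MONOTONIC)
  print(nanos, nanos2)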

beeboobaa3 wrote 13 hours 17 min ago:
Still hilarious how Apple goes about their "system security". Instead of actually implementing security in the kernel they just kinda prevent you from distributing an app that may call that functionality. Because that way they can still let their buddies abuse it without appearing too biased (by e.g. having a whitelist on the device).
This technical failing probably, partially, explains why they are so against allowing sideloading. That, and they're scared of losing their cash cow, of course.

dang wrote 9 hours 24 min ago:
We detached this subthread from [1].
[1]: https://news.ycombinator.com/item?id=40274188

NotPractical wrote 11 hours 46 min ago:
> This technical failing probably, partially, explains why they are so against allowing sideloading.
This occurred to me the other day. I've always laughed at the idea that Apple blocks sideloading for security purposes, but if the first line of defense is and always has been security through obscurity + manual App Store review (>= 2.0) on iOS, it's very possible that sideloading could cause problems. iOS didn't even have an App Store in release 1.0, meanwhile the Android security model has taken into account sideloaded apps since the very beginning [1]:
> Android is designed to be open. [...] Securing an open platform requires a strong security architecture and rigorous security programs. Android was designed with multilayered security that's flexible enough to support an open platform while still protecting all users of the platform. [1]
Edit: Language revised to clarify that I'm poking fun at the idea and am not the one who believes it.
[1]: https://source.android.com/docs/security/overview

saagarjha wrote 3 hours 38 min ago:
Android and iOS have largely the same threat model when it comes to platform security. That is, app review mostly does not exist and the OS itself must protect the user.

GeekyBear wrote 10 hours 22 min ago:
> the Android security model has taken into account sideloaded apps since the very beginning
Counterpoint: tech websites have literally warned users that they need to be wary of installing apps from inside Google's walled garden.
> With malicious apps infiltrating Play on a regular, often weekly, basis, there's currently little indication the malicious Android app scourge will be abated. That means it's up to individual end users to steer clear of apps like Joker. The best advice is to be extremely conservative in the apps that get installed in the first place. A good guiding principle is to choose apps that serve a true purpose and, when possible, choose developers who are known entities. Installed apps that haven't been used in the past month should be removed unless there's a good reason to keep them around. [1]
"You should not trust apps from inside the walled garden" is not a sign of a superior security model.
[1]: https://arstechnica.com/information-technology/2020/09/jok...

NotPractical wrote 9 hours 52 min ago:
> Counterpoint: tech websites have literally warned users that they need to be wary of installing apps from inside Google's walled garden.
This is not a counterpoint to what I was saying. I'm talking about sideloaded apps, not apps from Google Play. I agree that Google should work to improve their app vetting process, but that's a separate issue entirely, and one I'm not personally interested in.

GeekyBear wrote 9 hours 26 min ago:
If your security model is so weak that you can't keep malware out of the inside of your walled garden, the situation certainly isn't going to improve after you remove the Play store's app vetting process as a factor.

NotPractical wrote 8 hours 54 min ago:
I avoided making a claim regarding the relative "security level" of Android vs. iOS because it's not easy to precisely define what that means. All I was saying was that Android's security model explicitly accommodates openness. If your standard for a "strong" security model excludes openness entirely, that's fair I suppose, but I personally find it unacceptable. Supposing we keep openness as a factor for its own sake, I'm not sure how you can improve much on Android's model.
This discussion seems to be headed in an ideological direction rather than a technical one, and I'm not very interested in that.

GeekyBear wrote 8 hours 13 min ago:
If your point of view is that you value the ability to execute code from random places on the internet more than security, perhaps that is the point you should have been making from the start.
However, iOS makes the security trade-off in the other direction.
All of an app's executable code must go through the app vetting process, and additional executable code cannot be added to the app without the app going through the app vetting process all over again.
In contrast, Google has been unable to quash malware like Joker from inside the Play store because the malware gets downloaded and installed after the app makes it through the app vetting process and lands on a user's device.
> Known as Joker, this family of malicious apps has been attacking Android users since late 2016 and more recently has become one of the most common Android threats... One of the keys to Joker's success is its roundabout attack. The apps are knockoffs of legitimate apps and, when downloaded from Play or a different market, contain no malicious code other than a "dropper." After a delay of hours or even days, the dropper, which is heavily obfuscated and contains just a few lines of code, downloads a malicious component and drops it into the app. [1]
iOS not having constant issues with malware like Joker inside their app store has nothing to do with "security through obscurity" and everything to do with making a different set of trade-offs when setting up the security model.
[1]: https://arstechnica.com/information-technology/202...

saagarjha wrote 3 hours 42 min ago:
Downloading executable code is irrelevant; it's easy to alter app behavior dynamically on either platform.

spacedcowboy wrote 11 hours 5 min ago:
I'm not claiming that Apple is perfect, but compared to Android in terms of malware, security updates, and privacy, I think it comes out looking pretty good.

realusername wrote 9 hours 53 min ago:
Both look pretty similar to me, both in terms of policies and outcome.
While iOS has longer device support, it's also way less modular, and updates of system components will typically take longer to reach users than on Android, so I'd say both have their issues there.

beeboobaa3 wrote 11 hours 2 min ago:
Got some sources to cite, or is this the typical Apple fanboyism of "Android bad"?
I've used Android for years, never ran into any malware. I've also developed for Android and iOS. Writing malware is largely impossible due to the functional permission system; at least it's much, much harder than on the other operating systems. Apple just pretends it's immune to malware because of the manual reviews and static analysis performed by the store. It's also why they're terrified of letting people ship their own interpreters like JavaScript engines.

Aloisius wrote 9 hours 35 min ago:
A bit old, but: [1]
One might argue that Android is targeted more than iPhone because of its larger userbase, which certainly may contribute to it, but then macOS, which has a fraction of the userbase, is more targeted than iOS - that makes the case that sideloading or lax app store reviews really are at least partly to blame.
Given much of the malware seems to be apps that trick users into granting permissions by masquerading as a legitimate app or pirated software, it's not really too hard to believe that Apple's app store with their draconian review process and no sideloading might be a more difficult target.
[1]: https://www.pandasecurity.com/en/mediacenter/android-m...

beeboobaa3 wrote 8 hours 58 min ago:
Obviously a strict walled garden keeps out bad actors. The question is: Is it worth it? I say no.
People deserve to be trusted with the responsibility of making a choice. We allow everyone to buy power tools that can cause severe injuries when mishandled. No one blinks an eye. Just like we allow that to happen, we should allow people to use their devices in the way that they desire. If this means some malware can exist then I consider this to be acceptable.
In the meantime, system security can always still be improved.

Aloisius wrote 7 hours 24 min ago:
Yes, freedom to do what you want with your device is a great ideal.
Yet I still don't want to have to fix my mom's phone because it's loaded with malware or, worse, malware is draining her bank account.

threatofrain wrote 11 hours 13 min ago:
Is there a reputation of a security difference between Android and iOS? And in what direction does the badness lean?

beeboobaa3 wrote 11 hours 0 min ago:
There is a reputation of Apple being more secure, but it's largely unfounded. It just looks that way because the ecosystem is completely locked down and software isn't allowed to exist without Apple's stamp of approval.

kbolino wrote 10 hours 48 min ago:
Apple drove genuine security improvements in mobile hardware well before Android, including dedicated security chips and encrypted storage. The gap has been closed for a few years now, though, so the reputation is not so much "unfounded" as "out of date".

cyberax wrote 5 hours 23 min ago:
Like?

beeboobaa3 wrote 9 hours 56 min ago:
You're not talking about security that protects end users against malware. You're talking about "security" that protects the device against "tampering", i.e. the owner using it in a way Apple does not approve of.
Apple's "security improvements" have always been about protecting their walled garden first and foremost.

kbolino wrote 8 hours 57 min ago:
A mobile device, in most users' hands:
- Stores their security credentials for critical sites (banks, HR/payroll, stores, govt services, etc.)
- Even if not, has unfettered access to their primary email account, which means it can autonomously initiate a password reset for nearly any site
- Is their primary 2FA mechanism, which means it can autonomously confirm a password reset for nearly any site
That's an immense amount of risk, both from apps running on the device, and from the device getting stolen. Both of the measures I mentioned are directly relevant to these kinds of threats. And, as I already said, Android has adopted these same security measures as well.

beeboobaa3 wrote 8 hours 52 min ago:
So the same as any computer since online banking and email were invented. This isn't some new development. You should stop trying to nanny people.

kbolino wrote 8 hours 49 min ago:
I have no idea what you are trying to say in the context of the thread. Hardware security is important for all of that, and security measures have to evolve over time.

fingerlocks wrote 9 hours 31 min ago:
This just isn't true. We have multiple bricked Android devices from bootloader-infecting malware downloaded directly from the Play store. Nothing like that has ever happened on iOS.

beeboobaa3 wrote 8 hours 53 min ago:
The only thing this may prove is that Apple's app store review is more strict.

asveikau wrote 12 hours 30 min ago:
The hilarious thing is how people justify Apple's bugs with a security concern.
Just squinting at the stack trace from the article, my intuition is that someone at Apple added a bunch of nice-looking object-oriented stuff without regard for overhead. So a call to get a single integer from the kernel, namely the time, results in lots of objects being created on the heap and tons of "validation" going on. Then somebody on Hacker News says this is all for your own good.

sholladay wrote 13 hours 23 min ago:
Maybe unrelated, but I've noticed that setting a timer on iOS drains my battery more than I would expect, and the phone gets warm after a while. It's just bad enough that if the timer is longer than 15 minutes, I often use an alarm instead of a timer. Not something I've experienced on Android.

saagarjha wrote 3 hours 41 min ago:
Have you tried profiling the device to see why?

whywhywhywhy wrote 13 hours 3 min ago:
Lots of glitches in timers and alarms on recent iOS; sometimes they don't even fire. Extremely poor for something like setting a timer while cooking: you check your phone and there just isn't a timer running anymore, and you're left wondering how far it's overshot.
15 Pro, so definitely not an old-phone-new-software issue.

feverzsj wrote 13 hours 43 min ago:
I knew Swift had poor performance, but I didn't expect them to do it on purpose.

jepler wrote 14 hours 4 min ago:
I was curious how Linux's clock_gettime compared. I wrote a simple program that tried all the clock types documented in my manpage: [1]
My two handy systems were an i5-1235U running 6.1.0-20-amd64 and a Ryzen 7 3700X also running 6.1.0-20-amd64. The fastest method was 3.7ns/call on the i5 and 4ns/call on the Ryzen (REALTIME_COARSE and MONOTONIC_COARSE were about the same). If a "non-coarse" timestamp is required, the time increases to about 20ns/call on the Ryzen, 12ns on the i5 (realtime, tai, monotonic, boottime).
On the i5, if I force the benchmark to run on an efficiency core with taskset, times increase to 6.4ns and 19ns.
[1]: https://gist.github.com/jepler/e37be8fc27d6fb77eb6e9746014db92...
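
(If anyone wants to try roughly the same loop from Swift, a sketch; it assumes clock_gettime is visible via Glibc/Darwin and uses a hand-rolled nowNS helper:)

  import Foundation

  // Nanosecond timestamp from the monotonic clock.
  func nowNS() -> UInt64 {
      var ts = timespec()
      clock_gettime(CLOCK_MONOTONIC, &ts)
      return UInt64(ts.tv_sec) * 1_000_000_000 &+ UInt64(ts.tv_nsec)
  }

  // Time a million clock reads and report the per-call cost.
  let iterations = 1_000_000
  var ts = timespec()
  let start = nowNS()
  for _ in 0..<iterations {
      clock_gettime(CLOCK_MONOTONIC, &ts)
  }
  let elapsed = nowNS() - start
  print("\(Double(elapsed) / Double(iterations)) ns/call")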

jeffbee wrote 13 hours 45 min ago:
You can knock almost a third off that fastest time by building with `-static`. In something completely trivial like reading the clock via the vDSO, the indirect call overhead of dynamic libc linking becomes huge. `-static` eliminates one level of indirect calls. The indirect vDSO call remains, though.
% ./clocktest | rg MONOTONIC_COARSE
MONOTONIC_COARSE : 2.2ns percall

rfmoz wrote 14 hours 7 min ago:
OSX clock_gettime() [0] offers CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW, but not CLOCK_UPTIME, only CLOCK_UPTIME_RAW. Maybe someone knows why? On FreeBSD it is available [2].
[0]: [1]
[1]: https://www.manpagez.com/man/3/clock_gettime_nsec_np/
[2]: https://man.freebsd.org/cgi/man.cgi?query=clock_gettime

loeg wrote 11 hours 10 min ago:
Are you talking about CLOCK_UPTIME_FAST on FreeBSD? It does not have CLOCK_UPTIME_RAW.
In FreeBSD, the distinction lives here:
[1]: http://fxr.watson.org/fxr/source/kern/kern_time.c#L352

Shrezzing wrote 14 hours 10 min ago:
This is almost certainly intentional, and is very similar to the way web browsers mitigate the Spectre vulnerability[1]. Your processor (almost certainly) does some branch prediction to improve efficiency. If an application developer reliably knows the exact time, they can craft an application which jumps to another application's execution path, granting them complete access to its internal workings.
To mitigate this threat, JavaScript engine developers simply added a random fuzzy delay to all of the precision timing techniques. Swift's large volume of calls to unrequired methods is, almost certainly, Apple's implementation of this mitigation.
[1]: https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)

Veserv wrote 9 hours 15 min ago:
No, that is nonsense.
A competent organization would not make the function call take longer by a random amount of time. You would just do it normally, then add the random fudge factor to the normal result. That is not only more efficient, it also allows more fine-tuned control, the randomization is much more stable, and it is just plain easier to implement.
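
(Something like this hypothetical wrapper - purely an illustration of "fuzz the result, not the call", not anything Apple ships:)

  import Darwin

  // Take the real reading, then coarsen and jitter the result.
  func fuzzedNowNS() -> UInt64 {
      let real = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)
      let granularity: UInt64 = 1_000  // report at ~1us resolution
      let jitter = UInt64.random(in: 0..<granularity)
      return (real / granularity) * granularity &+ jitter
  }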

Though I guess I should not put it past them to do something incompetent, given that they either implemented their native clocks poorly, as the article says, or incompetently implemented a Spectre mitigation, as you theorize.

saagarjha wrote 9 hours 25 min ago:
This is not true in the slightest, and I feel that you might be misunderstanding how these attacks work. Spectre does not allow you to control execution of another process. It does not touch any architecturally visible state; it works via side channels. This means all it can do is leak information. The mitigation for Spectre in the browser adds a fuzzy delay (which is not considered to be very strong, fwiw). Just making a slower timer does not actually mitigate anything. And if you look at the code (it's all open source!) you can see that none of it deals with this mitigation; it's all just normal stuff that adds overhead. These attacks are powerful but they are not magic where knowing the exact time gives you voodoo access to everything.

vlovich123 wrote 10 hours 13 min ago:
This would have to be for Meltdown, not Spectre. Spectre is in-process and Meltdown is cross-process. In-process would be pointless for a language like Swift.
And it's a weird mitigation, because Meltdown afaik has been mitigated on other OSes without banning high-res timers.
The nail in the coffin for the security theory is that Date and clock_get_time are accessible and an order of magnitude faster.
This seems the more likely scenario: poorly profiled abstraction layers adding features without measuring the performance.

stefan_ wrote 10 hours 17 min ago:
Literally one page into the article there is the full stack trace that makes abundantly clear there is no such thing going on; they just added a bunch of overhead.
That's despite OSX having a vDSO-style mechanism for it:
[1]: https://github.com/opensource-apple/xnu/blob/master/libsysca...

beeboobaa3 wrote 13 hours 18 min ago:
Have to protect those pesky application developers from knowing the time so they can write correct software.
It makes sense for a web browser. Not for something like Swift.

vlovich123 wrote 10 hours 10 min ago:
No, this is pretty clearly just a bug / poor design. Mistakes happen.

beeboobaa3 wrote 9 hours 55 min ago:
Probably, but I'm just responding to GP who implied that Apple, in all its infinite wisdom, did this on purpose.

lxgr wrote 14 hours 5 min ago:
Nothing prevents applications from just calling the underlying methods mentioned in the article, so that can't be it. The author even benchmarked these!

Someone wrote 13 hours 57 min ago:
Nothing? FTA: "The downside to calling mach_absolute_time directly, though, is that it's on Apple's 'naughty' list - apparently it's been abused for device fingerprinting, so Apple require you to beg for special permission if you want to use it"

vlovich123 wrote 10 hours 11 min ago:
But Date and clock_gettime are still accessible and not much more overhead than the Mach API call. Additionally, as I mention in another comment, this would have to be about Meltdown, not Spectre, and Meltdown is mitigated in the kernel through other techniques without sacrificing timers.

cvwright wrote 12 hours 16 min ago:
All of the new privacy declarations are silly, but this one is especially ridiculous.
I'm pretty sure I can trigger a hit to the naughty API just by updating a @Published var in an ObservedObject. For those unfamiliar with SwiftUI, this is the most basic way to tell the system that your model data has changed and thus it needs to re-render the view. Pretty much every non-trivial SwiftUI app will need to do this.

sgerenser wrote 13 hours 16 min ago:
All the other methods "above" mach_absolute_time are still allowed though, including clock_gettime_nsec_np, which is only ~2x slower than mach_absolute_time. While the Swift clock is ~40x slower than mach_absolute_time. I don't see how intentional slowdown for fingerprinting protection can be the cause.

kevin_thibedeau wrote 10 hours 33 min ago:
Someone took inspiration from FizzBuzzEnterpriseEdition[1] and made their integer query API future-proof.
[1]: https://github.com/EnterpriseQualityCoding/FizzBuzzEnt...

asow92 wrote 13 hours 21 min ago:
It isn't difficult to be granted this permission. All an app needs to do is supply a reason defined in [1] as to why it's being used, in the app's bundled PrivacyInfo.xcprivacy file, which could be disingenuous.
[1]: https://developer.apple.com/documentation/bundleresource...

darby_eight wrote 13 hours 16 min ago:
It may not be difficult, but it's an additional layer of requirement. Defense in depth, baby!

Someone wrote 10 hours 11 min ago:
In addition, if you get caught lying about this, your app may be nuked and your developer account terminated. May not be a big hurdle, but it definitely can hurt if you have many users.

fathyb wrote 14 hours 6 min ago:
If this was intentional, shouldn't it also affect `mach_absolute_time`, which is used by the standard libraries of most languages and accessible to Swift?
Also note you can get precise JavaScript measurements (and threading, eg. using pthreads and Emscripten) by adding some headers:
[1]: https://developer.mozilla.org/en-US/docs/Web/API/Window/cros...
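
(For reference, the headers in question are the cross-origin-isolation pair; a page served with both gets crossOriginIsolated = true, and with it SharedArrayBuffer and higher-resolution timers:)

  Cross-Origin-Opener-Policy: same-origin
  Cross-Origin-Embedder-Policy: require-corp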

Shrezzing wrote 13 hours 55 min ago:
> Also note you can get precise JavaScript measurements (and threading) by adding some headers
Though you can access these techniques now, in the weeks after the Spectre attacks were discovered, the browsers all consolidated on "make timing less accurate across the board" as an immediate-term fix[1]. All browsers now give automatic access to imprecise timing by default, but have some technique to opt in for near-precise timing.
Similarly, Swift has SuspendingClock and ContinuousClock, which you can use without informing Apple. Meanwhile mach_absolute_time and similarly precise timing methods require developers to disclose the reasons for their use before Apple will approve the app on the store[2].
[1]: https://blog.mozilla.org/security/2018/01/03/mitigations-l...
[2]: https://developer.apple.com/documentation/kernel/1462446-m...
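
(For completeness, the no-declaration-needed Swift API (5.7+) looks like this:)

  // ContinuousClock keeps advancing while the system sleeps;
  // SuspendingClock does not. Neither needs a privacy declaration.
  let clock = ContinuousClock()
  let elapsed: Duration = clock.measure {
      // ... work being timed ...
  }
  print("took \(elapsed)")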

fathyb wrote 13 hours 33 min ago:
That makes a lot of sense, thank you!

vlovich123 wrote 10 hours 6 min ago:
No it doesn't. Higher-performance APIs like Date and clock_gettime are still available, not specially privileged, and 40x faster. This looks pretty clearly like a bug.
Spectre mitigations are also really silly here, because as a Swift app you already have full access to all in-process memory. It would have to be about Meltdown, but Meltdown is prevented through other techniques.

user2342 wrote 14 hours 16 min ago:
I'm not fluent in Swift and async, but the line:
for try await byte in bytes { ... }
for me reads like the time/delta is determined for every single byte received over the network. I.e. millions of times for megabytes sent. Isn't that a point for optimization, or do I misunderstand the semantics of the code?

metaltyphoon wrote 5 hours 44 min ago:
I'm not sure about Swift, but in C# an async method doesn't have to complete asynchronously. For example, when reading from files, a buffer will first be read asynchronously, then subsequent calls will complete synchronously until the buffer needs to be "filled" again. So it feels like most languages can do this optimization too.

saagarjha wrote 3 hours 48 min ago:
This is what Swift does.

samatman wrote 12 hours 31 min ago:
The code, as the author makes clear, is an MWE (minimal working example). It provides a brief framework for benchmarking the behavior of the clocks. It's not intended to illustrate how to efficiently perform the task it's meant to resemble.

spenczar5 wrote 11 hours 36 min ago:
But it seems consequential. If the time were sampled every kilobyte, the code would be 1,000 times faster - which is better than the proposed use of other time functions.
At that point, even these slow methods are using about 0.5ms per million bytes, so it should be good up to gigabit speeds.
If that's not fast enough, then sample every million bytes. Or, if the complexity is worth it, sample in an adaptive fashion.
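
(A sketch of that sampling idea applied to the article's per-byte loop; bytes and process(_:) here are stand-ins for whatever the real stream and work are. The clock can stay slow because it is rarely consulted:)

  var count = 0
  var lastCheck = ContinuousClock.now
  for try await byte in bytes {
      process(byte)
      count += 1
      if count % 1024 == 0 {  // check the clock per KiB, not per byte
          let now = ContinuousClock.now
          // ... timeout/rate logic using (now - lastCheck) ...
          lastCheck = now
      }
  }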

ajross wrote 14 hours 9 min ago:
Yeah, this is horrifying from a performance design perspective. But in this case you'd still expect the "current time" retrieval[1] to be small relative to all the other async overhead (context switching for every byte!), and apparently it isn't?
[1] On x86 Linux, it's just a quick call into the vDSO that reads the TSC and some calibration data, a dozen cycles or so.

marcosdumay wrote 13 hours 38 min ago:
The stream reader userspace libraries are very well optimized for handling the kind of "dumb" usage that should obviously create problems. (That's one of the reasons Linux expects you to use glibc instead of making syscalls directly.)
But I imagine the time-reading ones aren't as optimized. People normally do not call them all the time.

saagarjha wrote 3 hours 47 min ago:
They look very similar on macOS.

jerf wrote 13 hours 40 min ago:
Note the end of the article acknowledges this, so this is clearly a deliberate part of the constructed example to make a particular point and not an oversight by the author. But it is helpful to highlight this point, since it is certainly a live mistake I've seen in real code before. It's an interesting test of how rich one's cost model for running code is.

xyst wrote 14 hours 38 min ago:
Post mentions this Apple doc, [1], which states it can potentially be used to fingerprint a device?
How can this API be used to fingerprint devices? It's just getting the current time.
My best guess: you can infer a user's time zone, and thus get a very general/broad area of where this user lives (USA vs EU, or US-EST vs US-PST).
Maybe I should just set my time to UTC on all devices.
[1]: https://developer.apple.com/documentation/kernel/1462446-mach_...

singron wrote 14 hours 23 min ago:
The offset from epoch time is probably unique per device per boot, and it only drifts one second per second while the device is suspended.
You can get the time zone from less naughty APIs, and that has way fewer bits of entropy.

twoodfin wrote 14 hours 24 min ago:
The problem is it's getting the current time with relatively high precision, which is the same reason a developer would prefer it for non-nefarious uses.
Once you have a high-precision timer, there are all sorts of aspects of the user's system you can fingerprint by measuring how long some particular API dependent on device performance and/or state takes to execute.
Platform vendors long ago figured out not to hand out the list of available fonts, but it's a couple orders of magnitude harder to be sure switching some text from Menlo to Helvetica doesn't leak a fractional bit of information via device-dependent timing.
EDIT: Others noted it's actually ticks since startup, which is probably good for a few bits all on its own if you are tracking users in close to real time.

Waterluvian wrote 14 hours 18 min ago:
It's amazing just how much we lose here, elsewhere, and in browsers, because we have to worry about fingerprinting.

lesuorac wrote 10 hours 24 min ago:
It gets even funnier when you realize that devices & browsers let you set data per device that you can just retrieve later... Why bother fingerprinting when you can just assign them an ID and retrieve it later? [1] [2]
[1]: https://developer.apple.com/documentation/devicecheck
[2]: https://engineering.deptagency.com/securely-storing-data...
[3]: https://developer.mozilla.org/en-US/docs/Web/API/Window/...

saagarjha wrote 3 hours 44 min ago:
That gives you two bits of data, which is designed to not be easy to fingerprint with.

Aloisius wrote 9 hours 3 min ago:
That doesn't give you a fixed device ID like fingerprinting does.
A fixed device ID survives even when an app is uninstalled and reinstalled, and is the same for unaffiliated apps.

interpol_p wrote 14 hours 25 min ago:
My understanding is this gets something like the system uptime? (I may be reading the docs wrong.)
In which case, it could be used as one of many signals in fingerprinting a device, as you could distinguish a returning user by checking their uptime against the time delta since the uptime at their last visit. It's not perfect, but when combined with other signals, might be helpful.

lapcat wrote 14 hours 28 min ago:
mach_absolute_time is unrelated to clock time. It's basically the number of CPU cycles since last boot, so it's more of an uptime measure.
I suspect the fingerprinting aspect is more indirect: mach_absolute_time is the most accurate way to measure small differences, so if you're trying to measure subtle differences in performance between different devices on some specific task, mach_absolute_time would be the way to go.
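
(Side note: the raw value is in timebase ticks, not nanoseconds; the usual conversion, sketched:)

  import Darwin

  // Convert mach_absolute_time ticks to nanoseconds using the
  // numer/denom ratio from mach_timebase_info.
  var info = mach_timebase_info_data_t()
  mach_timebase_info(&info)
  let ticks = mach_absolute_time()
  let ns = ticks * UInt64(info.numer) / UInt64(info.denom)
  print("\(ns) ns since boot")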

VogonPoetry wrote 4 hours 7 min ago:
Consider N devices behind a NAT. They all make requests to a service.
If the service can learn the individual but current values of mach_absolute_time, then after a minimum of two requests you can likely compute N and distinguish between each device making a request.
This is possible because devices never reboot at exactly the same time.

MBCook wrote 11 hours 32 min ago:
> It's basically the number of CPU cycles since last boot, so it's more of an uptime measure
And there's the problem. Different devices have different uptimes. If you can get not only the uptime but a very, very accurate version, you've got a very strong fingerprint.

thealistra wrote 14 hours 19 min ago:
Yeah, this is correct. Other comments seem misinformed.
You can fingerprint a device using this because you know the wall clock difference and you know the previous CPU cycles. So you can assume any device with appropriately more CPU cycles may be the same device.
We're talking measurements taken from different apps using the Google or Facebook SDK.

simcop2387 wrote 14 hours 33 min ago:
The same way that you can do it from JavaScript, I'd imagine. Timezones and such are one data point, but the skew and accuracy can help you differentiate users too.
[1]: https://forums.whonix.org/t/javascript-time-fingerprinting/7...

koenneker wrote 14 hours 40 min ago:
Might this be a ham-fisted reaction to timing attacks?

tialaramex wrote 14 hours 45 min ago:
In the Swift library documentation itself, hopefully a Swift person can tell me: What is the significance of the list of Apple platforms given? For example, the Clock protocol shows iOS 16.0+ among others.
I can imagine that e.g. ContinuousClock is platform specific - any particular system may or may not be able to present a clock which keeps advancing despite being asleep for a while, and so, to the extent Apple claim Swift isn't an Apple-only language, ContinuousClock might nevertheless have platform requirements.
But the protocol seems free from such a constraint. I can write this protocol for some arbitrary hardware which has no concept of time - I can't implement it, but I can easily write the protocol down - and yet here it is, iOS 16.0+ anyway.

pdpi wrote 14 hours 28 min ago:
According to their changelog[0], Clock was added to the standard library with Swift 5.7, which shipped in 2022, at the same time as iOS 16. It looks like static linking by default was approved[1] but development stalled[2].
I expect that it's as simple as that: it's supported on iOS 16+ because the standard library is dynamically linked by default, against a system-wide version. You can probably try to statically link newer versions on old OS versions, or maybe ship a newer version of the standard library and dynamically link against that, but I have no idea how well those paths are supported.
0. [1]  1. [2]  2. [3]
[1]: https://github.com/apple/swift/blob/main/CHANGELOG.md
[2]: https://github.com/apple/swift-evolution/blob/main/proposals...
[3]: https://github.com/apple/swift-package-manager/pull/3905

rockbruno wrote 14 hours 29 min ago:
The standard library stopped being bundled with Swift apps when ABI stability was achieved. It is now provided as a dynamic library alongside OS releases, so you can only use Swift features that match the library version for a particular OS version.

beeboobaa3 wrote 12 hours 7 min ago:
Yikes. So after bundling their development tools with their operating system they are now also bundling some language's stdlib with the operating system? Gotta get them fingers in all of the pies, I guess.

MBCook wrote 11 hours 36 min ago:
> they are now also bundling some language's stdlib with the operating system
Much like libc, isn't it? Apple writes tons of their own software in Swift and the number keeps going up. They're trying to move more and more of the system to it. It's going to be loaded and in every system whether a user uses it or not.
No different from the Objective-C runtime.

jackjeff wrote 11 hours 18 min ago:
Absolutely agree.
On Windows the equivalent would be MSVCRT, which is sometimes shipped with the OS and sometimes not (depending on the versions involved). Sometimes you even need to worry about the CRT dependency with higher-level languages, because their "standard libraries" depend on the CRT.
So if you see that being installed with Java or C# or Unity, now you know why.

KerrAvon wrote 11 hours 54 min ago:
Every Unix and most Unix-likes have always done this. It's standard practice in that world.

beeboobaa3 wrote 11 hours 53 min ago:
Which distro ships the Go standard library?
Also, unixes let the sysadmin install additional libraries. How do I `apt install libswift2` on an iPhone?

saagarjha wrote 3 hours 45 min ago:
[1]: http://cydia.saurik.com/

Daedren wrote 14 hours 34 min ago:
Apple just doesn't backport APIs; it's a very very very rare occurrence when it happens.
It was introduced last year alongside iOS 16, so you require the latest OS. It's the norm, really.

tialaramex wrote 14 hours 21 min ago:
I guess maybe I didn't explain myself well. Swift is supposedly a cross-platform language. This "protocol", unlike the specific clocks, certainly seems like something you could equally well provide on, say, Linux. But it's documented as requiring (among others) iOS 16.0+.
Maybe there's a separate view onto this documentation if you care about non-Apple platforms? Or maybe there's just an entirely different standard library for everybody else?

lukeh wrote 14 hours 13 min ago:
Same standard library (Foundation has some differences, but that's another story). But the documentation on Apple's website only covers their own platforms.

netruk44 wrote 14 hours 50 min ago:
I've been learning me some Swift, and coming from C# I feel somewhat spoiled when it comes to timing things. In C#, the native Stopwatch class is essentially all you need for simple timers with sub-millisecond precision.
Swift has not only the entire table of options from TFA to choose from, but also additional ones like DispatchTime [0]. They might all boil down to the same thing (mach_absolute_time, according to the article), but from the perspective of someone trying to learn the language, it's all a little confusing.
Especially since there are also hidden bottlenecks like the one this post is about.
[0]:
[1]: https://developer.apple.com/documentation/dispatch/dispatchtim...
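
(For anyone comparing with C#'s Stopwatch: the DispatchTime version is about as small; per the article, these options all bottom out in mach_absolute_time:)

  import Dispatch

  // Stopwatch-style elapsed time via DispatchTime.
  let start = DispatchTime.now()
  // ... work being timed ...
  let end = DispatchTime.now()
  let ms = Double(end.uptimeNanoseconds - start.uptimeNanoseconds) / 1e6
  print("took \(ms) ms")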

chubs wrote 14 hours 30 min ago:
Just use CACurrentMediaTime for that, or Date(), both simple options :)

fathyb wrote 14 hours 12 min ago:
I believe `CACurrentMediaTime` depends on `QuartzCore.framework`, and `Date` is not monotonic.
I would also find it confusing if I found code doing something like network retries using `CACurrentMediaTime`.

vlovich123 wrote 10 hours 4 min ago:
clock_gettime

foolswisdom wrote 15 hours 28 min ago:
> we're talking a mere 19 to 30 nanoseconds to get the time elapsed since a reference date and compare it to a threshold.
The table shows 19 or 30 milliseconds for Date / NSDate. Or am I misunderstanding something?

taspeotis wrote 15 hours 21 min ago:
Divide by a million.

Medea wrote 15 hours 25 min ago:
"showing the median runtime of the benchmark, which is a million iterations of checking the time"

foolswisdom wrote 13 hours 45 min ago:
Thanks.
<- back to front page |