>> Pokémon GO - REVISITING THE "HACKING" SCENE (PART 8)
Unfortunately, algorithm optimizations can sometimes play havoc with intended
security measures.
With the official release of Pokémon GO version 0.47 being rolled out, it is only a matter of time before the iOS IPA is pulled apart and the #re team starts the brutal effort of finding the elusive hash function location. The consensus is that Niantic will make significant changes to the algorithm in the hope of keeping the API under wraps for just a little longer. That said, what fundamental flaw in Niantic's existing hash function made finding the magic numbers so easy?
In previous blog entries I discussed the nature of the hash function, and that there was a clear pattern: as developers, we could isolate with exact precision when certain magic numbers would be used in the algorithm. In effect, we could focus on a subset of the magic numbers with each iteration, reducing the number of candidates to brute-force.
This was already stated explicitly in the way I wrote the blog entries; but after posting a poll on reddit there were conflicting views about whether or not I should even be writing this entry. In the end, knowing that the function will change, I decided to do it for educational purposes, even though the consensus leaned more towards no than yes. For those who will hate me for it: sorry guys.
The ironic thing is that the code seems to have been compromised for the sake of optimization.
--- pogohash.c
+++ pogohash-new.c
 __uint128_t hash = 0;
 for (i = 0; i < 8; i++) {
     int offset = i * 16;
-    if (offset >= size) {
-        break;
-    }
     uint64_t a = read_int64(chunk + offset);
     uint64_t b = read_int64(chunk + offset + 8);
     hash += (__uint128_t) (a + magic_table[i * 2]) *
             (b + magic_table[i * 2 + 1]);
In the hash_chunk function there was an early exit condition; simply removing it would ensure that every single magic number is referenced regardless of the length of the buffer. But of course, any good developer would know that this creates a buffer overrun condition, as the algorithm expects 128 bytes within the chunk in order to work through all the iterations.
--- pogohash.c
+++ pogohash-new.c
 // copy tail, pad with zeroes
 uint8_t tail[128] = {0};
 int tail_size = len % 128;
 memcpy(tail, in + len - tail_size, tail_size);
+// always hash a buffer of at least 128 bytes
+if ((num_chunks > 1) && (tail_size == 0)) num_chunks--;
+tail_size = 128;
+
 __uint128_t hash;
 if (num_chunks) {
     // Hash the first 128 bytes
Looking at the hash function, the #re team had already put code in place to copy the tail (anything less than 128 bytes), padded with zeroes, into a separate buffer. This guarantees no buffer overflow, and since 128 bytes are always passed to the hash function, ROUND_MAGIC, FINAL_MAGIC0, FINAL_MAGIC1 and every entry in magic_table would be used with every hash request.
Now, of course, it would make sense to refactor the code into a simple loop instead of the conditional one it has right now:
__uint128_t hash;
hash = hash_chunk(in_pad, 128);
hash += ROUND_MAGIC;
if (num_chunks) {
    while (--num_chunks) {
        in_pad += 128;
        hash = hash_muladd(hash, ROUND_MAGIC, hash_chunk(in_pad, 128));
    }
}
Much simpler, yet it performs exactly the same regardless of the buffer length.
With the IPA to be made available within the next day or so (subject to Apple review), time will tell whether Niantic made sufficient changes to their hashing algorithm. Even with the above suggestion, there is no guarantee that the community will not figure it out; CPU emulation will prevail until Niantic can, as part of the anti-cheat mechanism, tie in a platform-specific reference such as SafetyNet.