before i start, this is written really sloppily and quickly as an explanation for a friend, thankyou to kimi k2.5 for making some of my notes actually readable and scraping my terminal history so it’s coherent <3

i had a Lorex Cirrus 2.6.0 APK and an Android device that boots with 16 KB memory pages. the stock APK installs, but as soon as the app tries to load its native libraries it crashes: the dynamic linker on a 16 KB device refuses any .so whose LOAD segments aren't 16 KB-aligned (more on the exact rules in section 5).

welcome to my shitty notes on how i rebuilt the APK to make it run anyway™. oh and i also looked at the app’s TLS setup because,,,, funny leaked certs???? :3

shit i used: android-tools, android-apktool (AUR), jadx + Android Stuwudio (for tools - fuck the IDE) :3

shit i thought i’d use but didn’t (useless fucks /s): patchelf, lief, python-frida-tools (idk why i thought frida would be needed, i just remembered it wasn’t installed on this machine idk)

1. hello little APK! - “hello charlie”

an APK is just a ZIP with an android-flavoured (this shit tastes like green robot) directory layout. the first useful thing is figuring out which CPU architectures it supports.

each subdir under lib/ corresponds to an ABI:

$ unzip -l "Lorex Cirrus_2.6.0.apk" | awk '/^.*lib\//{print $4}' | awk -F/ '{print $2}' | sort -u
arm64-v8a
armeabi-v7a

yippee!!! it’s not 32bit only :3

since modern android devices are 64-bit only, fuck allat 32-bit code, baibaiiiiiii (-28MB - weight saving or whatever)
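same ABI check without the awk chain, as a quick python sketch (an APK really is just a zip, so stdlib zipfile is enough; function name is mine):

```python
import zipfile

def list_abis(apk):
    """Return the sorted ABI dirs under lib/ in an APK (path or file-like)."""
    with zipfile.ZipFile(apk) as z:
        return sorted({name.split('/')[1]
                       for name in z.namelist()
                       if name.startswith('lib/') and name.count('/') >= 2})

# list_abis("Lorex Cirrus_2.6.0.apk")  -> ['arm64-v8a', 'armeabi-v7a']
```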

we should probably decode it with apktool though to get a real “working” tree:

apktool d "Lorex Cirrus_2.6.0.apk" -o lorex

that gives you AndroidManifest.xml (decoded), all the resources, disassembled smali under smali/, smali_classes2/, smali_classes3/, and the native libs under lib/.

2. oh it’s just a skin,,, lol

the manifest says package="com.lorex.cirrus", but every class lives under com.raysharp.camviewplus?

that’s because Lorex doesn’t actually build this app. it’s almost certainly a rebrand of RaySharp’s reference app camviewplus (similar to how Tuya behave).

you can see this leak through everywhere, even in assets/appConfig:

SUPPORT_TUTK_P2P = false
PLAYBACK_VIDEO_NUM = 4

the TUTK P2P stack (ThroughTek Kalay) is bundled, 5 whole native libs (libIOTCAPIs.so, libTUTKGlobalAPIs.so, libt2u.so, libP2PTunnelAPIs.so, libRDTAPIs.so), but disabled in this build. lorex has their own P2P backend (rsp2p.lorexservices.com:8443) and they just,,,, kept all the TUTK libs in there anyway. ~15 MB of dead binary every install :/

the rest of the native side is a who’s-who of chinese mobile SDKs: SQLCipher for the local DB, Tencent Mars logging, Baidu Push, TensorFlow Lite for the face-recognition feature, plus RaySharp’s own video/network/security libs.

3. TLS detour (skip if you don’t care)

tangent! i went down this before i actually knew what i needed to fix. skip to section 4 if you only care about the 16 KB stuff.

assets/ has three funky files: api_key.txt, ca.cer, client.p12. the api_key turned out to be a Login With Amazon dev key, those JWTs are meant to be embedded in apps so whatever. but client.p12 is a PKCS#12 client cert, which means mTLS to something.

PKCS#12 needs a password. grep the smali:

$ grep -rln 'client.p12' smali*
smali_classes3/com/raysharp/camviewplus/retrofit/https/SSLUtils.smali

that file builds the SSL context, and a few lines above the KeyStore.load() call the password is just, sitting there as a string constant: 1yhig92YFIwcbRyi. ooookay buddy….

$ openssl pkcs12 -in client.p12 -legacy -passin pass:1yhig92YFIwcbRyi -nodes \
    | openssl x509 -noout -subject -issuer
subject=C=CN, ST=Guangdong, L=Zhuhai, OU=R & D Center, O=Raysharp, CN=localhost
issuer=C=CN, ST=Guangdong, L=Zhuhai, OU=R & D Center, O=Raysharp, CN=localhost

(-legacy is needed because OpenSSL 3 said baibai the RC2-40-CBC cipher RaySharp wrapped this with back in 2019.)

CN=localhost is the red flag of all red flags. TLS hostnames are supposed to match the server you’re hitting, CN=localhost is what you get when somebody made a cert for testing and never replaced it. to make it work in prod, RaySharp set the OkHttp HostnameVerifier to literally return true, which i confirmed:

$ grep -A2 'public verify' RetrofitUtils\$1.smali
.method public verify(Ljava/lang/String;Ljavax/net/ssl/SSLSession;)Z
    const/4 p1, 0x1
    return p1

same self-signed CN=localhost issuer is also on ca.cer, so i assumed ca.cer was the pinned trust anchor and the whole thing was a “pin our own root + ignore hostnames” setup. but then i actually probed the server:

$ openssl s_client -connect cirrusapp.lorexservices.com:443 -servername cirrusapp.lorexservices.com </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -subject -fingerprint -sha256
subject=CN=*.lorexservices.com
issuer=C=US, O=Amazon, CN=Amazon RSA 2048 M01

unsurprisingly, and surprisingly (depending on how much credit you give a random NVR/DVR manufacturer), that’s a normal publicly-trusted Amazon cert. SHA-256 A5:0A:4F:... doesn’t match the bundled B4:A1:A5:... at all. so ca.cer isn’t validating the cloud, i was wrong :(…. so i went back and re-read getCaSSLContext():

SSLContext.getInstance("TLSv1.2")
sslContext.init(null, null, new SecureRandom())

init(null, null, ...) means: use the default android trust store. despite being literally NAMED getCaSSLContext, the function never loads ca.cer lol. the cert is only used by the other SSLContext, the mTLS one, which only runs for the QR-code device-import flow where the app talks to a recorder on your LAN that presents its own RaySharp-issued cert.

so ca.cer is just for talking to RaySharp recorders. client.p12 is one shared identity baked into every install on earth, so it proves “yep i’m a real RaySharp app” but nothing about who the user is. not great, not unusual for IoT cameras. :/

anyway, moving on.

4. bye-bye armv7 <3

real plan: cut lib/armeabi-v7a/ out of the APK, re-sign, install. android’s package manager picks an ABI at install time and skips the others, so this is purely a size win, but also the easiest first step before the 16 KB stuff.

the original lorex signature dies the second we touch any content, so we throw it out and resign with android’s debug keystore (~/.android/debug.keystore, alias androiddebugkey, password android, made by Android Stuwudio the first time you ran it). that key is trusted by absolutely nobody but for sideloading it’s fine, android only cares that the signature is valid and consistent across updates, so LGTM.

BT=~/Android/Sdk/build-tools/36.1.0

cp "Lorex Cirrus_2.6.0.apk" lorex-arm64.apk

# 1. drop the 32-bit libs and the old signature
zip -dq lorex-arm64.apk "lib/armeabi-v7a/*" \
                        "META-INF/CERT.RSA" "META-INF/CERT.SF" "META-INF/MANIFEST.MF"

# 2. align (gotta run before signing for v2+)
$BT/zipalign -p -f 4 lorex-arm64.apk lorex-arm64-4K.apk

# 3. sign
$BT/apksigner sign --ks ~/.android/debug.keystore \
                   --ks-pass pass:android \
                   --ks-key-alias androiddebugkey \
                   --key-pass pass:android \
                   lorex-arm64-4K.apk

# 4. verify
$BT/apksigner verify -v lorex-arm64-4K.apk

now we end up with a 45 MB arm64-only APK that installs and runs on a normal 4 KB-page device. apksigner reports v1+v2+v3 schemes verified!

if you try to install over an existing build and the signature doesn’t match, you get INSTALL_FAILED_UPDATE_INCOMPATIBLE. uninstall the old one first: adb uninstall com.lorex.cirrus.

5. the 16 KB-page problem (the actual reason we’re here)

android 15 added support for booting with 16 KB memory pages. some pixel 8/9/10 (not 11, that will be running android 4.4.4 for some reason) builds ship that way, the official emulator also has a 16K image, and future devices are gonna keep going this direction. on a 16 KB device, the dynamic linker has two extra rules for every PT_LOAD in every .so:

  1. p_align >= 16384
  2. p_offset MOD 16384 == p_vaddr MOD 16384
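those two rules as a tiny predicate, just restating the linker's check (name is mine):

```python
PAGE_16K = 16384  # 16 KB page size

def load_ok_16k(p_offset, p_vaddr, p_align):
    """True iff a single PT_LOAD passes both 16 KB linker rules."""
    return p_align >= PAGE_16K and p_offset % PAGE_16K == p_vaddr % PAGE_16K
```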

the cleanest fix is to rebuild with -Wl,-z,max-page-size=16384 at link time, the linker bakes it in. but i don’t have RaySharp’s source, only the prebuilt blobs.

quick check on which libs are misaligned:

$ for so in lib/arm64-v8a/*.so; do
    align=$(readelf -lW "$so" | awk '/LOAD/{print $NF; exit}')
    echo "$align  $(basename $so)"
  done

0x10000  libIOTCAPIs.so
0x10000  libP2PTunnelAPIs.so
...
0x1000   libRSNet.so
0x1000   libRSPlay.so
0x1000   libSDKWrapper.so
0x1000   libaudio3a.so
0x1000   libsqlcipher.so
0x1000   libtensorflow-lite.so

6 libs aligned to 4 KB (0x1000), the rest at 64 KB which is incidentally fine for 16 KB pages too. of the 6, the first LOAD is usually at file offset 0 so it’s trivially congruent, but the second (writable) LOAD sits at some random offset that doesn’t satisfy the mod-16384 rule.

e.g. libRSNet.so:

LOAD  0x000000   0x000000  ...  R E  0x1000
LOAD  0x1179f48  0x117af48 ...  RW   0x1000

0x1179f48 mod 0x4000 = 0x1f48, 0x117af48 mod 0x4000 = 0x2f48. off by exactly 0x1000. fix: shift the second LOAD forward 0x1000 bytes so its offset becomes 0x117af48, which then matches the vaddr mod 0x4000.
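double-checking that arithmetic in python (this is the same pad formula the script further down uses):

```python
PAGE = 0x4000  # 16 KB

off, vaddr = 0x1179f48, 0x117af48       # libRSNet.so second LOAD
pad = (vaddr % PAGE - off % PAGE) % PAGE

print(hex(off % PAGE), hex(vaddr % PAGE), hex(pad))
# → 0x1f48 0x2f48 0x1000

# after inserting `pad` bytes, the offset residue matches the vaddr residue
assert (off + pad) % PAGE == vaddr % PAGE
```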

that’s literally the whole trick. insert zero padding before each misaligned LOAD until its file offset lands on the right residue, then bump p_align to 0x4000. of course inserting bytes in the middle of an ELF means everything after shifts, so i also have to fix up every later program header, every section header, and e_shoff in the ELF header.

i tried patchelf --page-size 16384 first, it does nothing unless paired with another operation, and even then it doesn’t re-lay-out segments. LIEF (python ELF lib) can edit segment file offsets but doesn’t pad the file or fix dependent offsets, you’d be writing the same logic on top. so i just wrote it in python with struct (translation: “kimi, please do this for me”):

import struct
PAGE = 0x4000
PT_LOAD = 1

def realign(path):
    data = bytearray(open(path, 'rb').read())
    assert data[:4] == b'\x7fELF' and data[4] == 2 and data[5] == 1  # ELF64, LE

    # ELF64 header layout (little-endian): e_phoff @32, e_shoff @40,
    # e_phentsize @54, e_phnum @56, e_shentsize @58, e_shnum @60
    e_phoff  = struct.unpack_from('<Q', data, 32)[0]
    e_shoff  = struct.unpack_from('<Q', data, 40)[0]
    e_phentsize, e_phnum, e_shentsize, e_shnum = \
        struct.unpack_from('<HHHH', data, 54)

    # read all program headers
    phdrs = []
    for i in range(e_phnum):
        o = e_phoff + i * e_phentsize
        f = struct.unpack_from('<IIQQQQQQ', data, o)
        phdrs.append({'hdr': o, 'p_type': f[0],
                      'p_offset': f[2], 'p_vaddr': f[3],
                      'p_align': f[7]})

    # read section headers (only sh_offset matters)
    shdrs = [{'idx': i,
              'sh_offset': struct.unpack_from('<Q', data,
                              e_shoff + i*e_shentsize + 24)[0]}
             for i in range(e_shnum)]

    # walk LOADs in file order, compute padding needed before each
    loads = sorted([p for p in phdrs if p['p_type'] == PT_LOAD],
                   key=lambda p: p['p_offset'])
    inserts = []          # (orig_offset_to_insert_before, pad_bytes)
    cum = 0
    for ld in loads:
        new_off = ld['p_offset'] + cum
        pad = (ld['p_vaddr'] % PAGE - new_off % PAGE) % PAGE
        if pad:
            inserts.append((ld['p_offset'], pad))
            cum += pad

    if not inserts:
        # no file shift needed, just bump alignment in each LOAD
        for ph in phdrs:
            if ph['p_type'] == PT_LOAD and ph['p_align'] < PAGE:
                struct.pack_into('<Q', data, ph['hdr'] + 48, PAGE)
        open(path, 'wb').write(data)
        return

    # insert padding in reverse order so earlier offsets stay valid
    for off, pad in sorted(inserts, reverse=True):
        data[off:off] = b'\x00' * pad

    # shift function: where does byte originally at orig_off live now?
    sins = sorted(inserts)
    def shift(orig):
        s = 0
        for ins_off, p in sins:
            if ins_off <= orig: s += p
            else: break
        return s

    # update ELF header
    new_e_shoff = e_shoff + shift(e_shoff)
    struct.pack_into('<Q', data, 40, new_e_shoff)
    new_e_phoff = e_phoff + shift(e_phoff)
    if new_e_phoff != e_phoff:
        struct.pack_into('<Q', data, 32, new_e_phoff)

    # rewrite program headers at their new location
    for ph in phdrs:
        new_p_off = ph['p_offset'] + shift(ph['p_offset'])
        align = PAGE if ph['p_type'] == PT_LOAD else ph['p_align']
        pos = new_e_phoff + (ph['hdr'] - e_phoff)
        struct.pack_into('<Q', data, pos + 8,  new_p_off)   # p_offset
        struct.pack_into('<Q', data, pos + 48, align)        # p_align

    # rewrite section headers at their new location
    for sh in shdrs:
        if sh['sh_offset'] == 0:
            continue
        new_sh_off = sh['sh_offset'] + shift(sh['sh_offset'])
        struct.pack_into('<Q', data,
                         new_e_shoff + sh['idx']*e_shentsize + 24,
                         new_sh_off)

    open(path, 'wb').write(data)

stuff worth knowing:

  • shift() is the whole point. after inserting pad bytes at position X, the byte originally at X now lives at X + pad, byte at X+1 is at X+1+pad, anything strictly before X is undisturbed. so shift(orig) is just the sum of all pad values for insertions at offsets <= orig.
  • dynamic linker tags (DT_INIT, DT_STRTAB, DT_RELA, etc) hold virtual addresses not file offsets, so we don’t touch them. same for relocations and .eh_frame.
  • sections with sh_offset == 0 are typically SHT_NULL or SHT_NOBITS (.bss), skip them.
  • this works because the gap between LOAD1 and LOAD2 in a typical .so is dead space, no section’s content lives there. the linker laid out the file so LOAD2 starts on a fresh memory page. inserting zeros into that gap is harmless, the loader doesn’t care what’s there.
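a minimal demo of the shift() logic from the first bullet, on a toy buffer (toy offsets, obviously, not a real ELF):

```python
# insert 2 pad bytes before offset 3 of a toy 6-byte "file"
data = bytearray(b'ABCDEF')
inserts = [(3, 2)]                       # (orig_offset, pad_bytes)

for off, pad in sorted(inserts, reverse=True):
    data[off:off] = b'\x00' * pad        # same splice the script does

def shift(orig):
    # sum of all pads inserted at offsets <= orig
    return sum(p for ins_off, p in inserts if ins_off <= orig)

# byte originally at 2 ('C') is undisturbed, byte at 3 ('D') moved by 2
assert data[2 + shift(2)] == ord('C')    # shift(2) == 0
assert data[3 + shift(3)] == ord('D')    # shift(3) == 2
```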

ran it on all 6 libs, every LOAD passed:

-- libRSNet.so --
  off=0x000000     vaddr=0x0000000000000000  align=0x4000  16k_ok=YES
  off=0x117af48    vaddr=0x000000000117af48  align=0x4000  16k_ok=YES

readelf -h -l -d ran clean, file(1) still IDs them as proper aarch64 shared objects with the original BuildID intact :3

6. repackage and sign (round 2)

same dance as before but on top of the arm64-only APK:

ROOT=/home/charlie/lorexRE
BT=~/Android/Sdk/build-tools/36.1.0

cp lorex-arm64-4K.apk lorex-arm64-16K.apk
zip -dq lorex-arm64-16K.apk \
    "META-INF/CERT.RSA" "META-INF/CERT.SF" "META-INF/MANIFEST.MF"

# replace the 6 .so files
( cd stage16k && zip -Xq "$ROOT/lorex-arm64-16K.apk" \
    lib/arm64-v8a/libRSNet.so \
    lib/arm64-v8a/libRSPlay.so \
    lib/arm64-v8a/libSDKWrapper.so \
    lib/arm64-v8a/libaudio3a.so \
    lib/arm64-v8a/libsqlcipher.so \
    lib/arm64-v8a/libtensorflow-lite.so )

# zipalign with -P 16: align uncompressed .so entries to 16 KB inside
# the zip. heads up: you can't combine -p (4 KB) and -P (custom), pick one.
$BT/zipalign -P 16 -f 4 lorex-arm64-16K.apk lorex-arm64-16K-aligned.apk

$BT/apksigner sign --ks ~/.android/debug.keystore \
                   --ks-pass pass:android \
                   --ks-key-alias androiddebugkey \
                   --key-pass pass:android \
                   lorex-arm64-16K-aligned.apk

$BT/apksigner verify -v lorex-arm64-16K-aligned.apk

native libs are still DEFLATE-compressed in the zip (same as the original APK). with android:extractNativeLibs not set in the manifest, android falls back to extracting compressed libs to /data/app/.../lib/ at install time and mmapping from there, so the in-zip alignment is just a courtesy, what actually matters at runtime is the ELF internal alignment which is what we fixed.

$ adb uninstall com.lorex.cirrus
$ adb install lorex-arm64-16K-aligned.apk

installed. launched. yippeee!

recap and thankyous

the whole thing was:

  1. apktool d to see what’s inside.
  2. realize Lorex didn’t actually write this app, RaySharp did.
  3. get distracted by TLS that looked sloppy and kinda was, but not in the way i initially thought.
  4. strip the 32-bit half with zip -d.
  5. find out 6 native libs aren’t 16 KB compatible.
  6. write a python script that pads ELF LOAD segments to the right file-offset residue and fixes every dependent offset.
  7. repackage, re-zipalign at 16 KB, re-sign with android debug key.
  8. install.

arm64-only build dropped 60 MB -> 45 MB. the 16-KB-aligned version is the same size (the 32 KB of padding across 6 libs is rounding error). both install fine on normal 4 KB-page devices too, since the 16 KB constraint is strictly stronger than the 4 KB one.
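the "strictly stronger" bit is just modular arithmetic: 4096 divides 16384, so congruence mod 16384 implies congruence mod 4096 (and p_align 16384 >= 4096). quick brute-force sanity check:

```python
PAGE_4K, PAGE_16K = 4096, 16384
assert PAGE_16K % PAGE_4K == 0           # 4 KB divides 16 KB

# if off ≡ vaddr (mod 16384) then off ≡ vaddr (mod 4096)
for off in range(0, 3 * PAGE_16K, 1024):
    for vaddr in range(0, 3 * PAGE_16K, 1024):
        if off % PAGE_16K == vaddr % PAGE_16K:
            assert off % PAGE_4K == vaddr % PAGE_4K
```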

if the app had a runtime self-signature check (common in DRM-heavy or “security-conscious” IoT apps), none of this would have worked because re-signing with a debug key would’ve gotten detected. RaySharp apparently didn’t bother (shocker), or the check is only on code paths i haven’t hit yet. if it ever shows up, next move is to find the check in libRSSecurity.so, patch the comparison in smali, and rebuild. different post tho :3 (don’t count on it)