Awesomely fast enhanced version of Apple's PhotoScroller that also pulls images from the network.
PhotoScrollerNetwork Project
Latest Update Feb 11, 2020
Latest Code Update: v3.2 Aug 8, 2019
— IMPORTANT —
This project IS NOW OFFICIALLY unsupported!
Replaced by “PhotoScrollerSwiftPackage” and a demo/test project, “PhotoScrollerNetwork2”
A “Swift Package” that can be used with Xcode 11.
Note that “Swift Package” is a misnomer, since
such packages can contain Swift, Objective-C, and C/C++ code, and are usable from both Swift and Objective-C.
Feb 11 2020 UPDATE:
Older news
This project builds and runs using Xcode 11.3 - but fails using Xcode 11.4-beta (due to some issue with libturbojpeg)
Seriously, I don’t think anyone should use this code now given the new Swift Package.
Apple’s PhotoScroller project lets you display huge images using CATiledLayer, but only if
you pre-tile them first! My PhotoScrollerNetwork project supports tiling large local
files (using either CGImage or libjpeg-turbo) and network fetches (using only libjpeg-turbo).
Thus, for purely local use, you can use CGImage (as long as you don’t run into memory problems,
so test on older, smaller devices). With libjpeg-turbo, the jpeg files are read line by line
and decompressed to disk as pre-rendered RGB tiles.
This sample code:
The ImageDecoder enum defined in PhotoScrollerCommon selects the mode of decompression:
cgimageDecoder=0, // Use CGImage, libjpeg-turbo not required
libjpegTurboDecoder, // Use libjpeg-turbo, but not incrementally (used when loading a local file)
libjpegIncremental // Used when we download a file from the web, so we can process it a chunk at a time.
The code is licensed under terms as described in the attached LICENSE file.
Notes:
Originally I tried to do as Apple does: receive a single jpeg file, then create all the tiles on
disk as pngs. This process took around a minute on the iPhone 4 and was thus rejected.
At WWDC 2012, I talked to the OSX/iOS Kernel manager at a lab, and discussed the memory-pressure problem
that users had seen, as well as my current solution. He actually said it was as good a way to deal with it on iOS as
can be done now with the current APIs. So, even though I had said I hacked a solution into this project, in the end
it’s actually pretty elegant!
RECENT CHANGES:
v3.2
v3.1
v3.0
v2.6
v2.5:
v2.4:
v2.3:
v2.2:
v2.2b:
v2.1a:
v2.0:
v1.7:
v1.6:
v1.5:
v1.4:
v1.3:
v1.2:
v1.1:
So, you want to use a scrolling view with zoomable images on an iOS device. You discover that Apple has this really nice sample project called “PhotoScroller”, so you download and run it.
It looks really nice and seems to be exactly what you need! You see three beautiful jpeg images in the UIScrollView. But, you dig deeper, and with a growing pit in your stomach, you discover that the project is a facade: it only works because those three beautiful jpegs were pre-tiled into 800 or so small png tiles, prepared to meet the needs of the CATiledLayer backing the scroll view.
Fear not! Now you have PhotoScrollerNetwork to the rescue! Not only does this project solve the problem of using a single image in an efficient and elegant manner, but it also shows you how to fetch images from the network using Concurrent NSOperations, and then how to efficiently decode and re-format them for rapid display by the CATiledLayer. Note that for single core processors like the 3GS and 4, the decode time is additive. I challenge anyone to make this faster!
This code leverages my github Concurrent_NSOperations project (https://github.com/dhoerl/Concurrent_NSOperations), as image fetching is done using Concurrent NSOperations. The images were uploaded to my public Dropbox folder - you will see the URL if you look around.
The included Xcode 4 project has two targets, one using just Apple APIs, and the second using libjpeg-turbo, both explained below.
KNOWN BUGS:
TODO:
PhotoScrollerNetwork Target: FAST AND EFFICIENT TILING USING A CGIMAGESOURCE
This target does exactly what the PhotoScroller does, but it only needs a single jpeg per view, and it dynamically creates all the tiles as sections of a larger file instead of using individual files.
Process:
obtain a complete image as a file URL or as a NSData object
create a tmp file that is the same size or larger, with dimensions rounded up to a multiple of 256, and a prepended scratch space of one row of tiles
once opened, unlink the file so it will actually disappear when the file descriptor is closed (old unix trick)
mmap the complete file for reading and writing (CGImageRef and turbo modes), or just two rows of tiles (libjpegIncremental)
non-incremental methods use the address returned from mmap with a CGBitmapContext, and use CGContextDrawImage to populate the bits
for each zoom-out level, create a similar file half the size, and draw into it efficiently (or with vImage)
for each file, rearrange the image so that contiguous 256x256x4-byte memory chunks map exactly onto one tile, in the same col/row order that the CATiledLayer draws
when the view requests a tile, provide it with an image that uses CGDataProviderCreateDirect, which knows how to mmap the image file and provide the data in a single memcpy, mapping the smallest possible number of pages.
In the end, you have n files, each containing image tiles that can be memcpy’d efficiently (each tile consists of a contiguous block of memory pages). If the app crashes, the files go away, so no cleanup is needed. Once the images are created and go out of scope, they are unmapped. When the scrollview needs images, only those pages needed to populate the required tiles get mapped, and only long enough to memcpy the bits.
This solution scales to huge images. The limiting factor is the amount of file space. That said, you may need to tweak the mmap strategy if you have threads mapping in several huge images. For instance, you might use a serial queue that allows only one mmap at a time if you have many downloads going at once.
PhotoScrollerNetworkTURBO Target: INCREMENTAL DECODING (see http://sourceforge.net/projects/libjpeg-turbo)
When you download jpegs from the internet, the processor is idling waiting for the complete image to arrive, after which it decodes the image. If it were possible to have CGImageSourceCreateIncremental incrementally decode the image as it arrives (and you feed it this data), then my job would have been done. Alas, it does not do that, and my DTS event to find out some way to cajole it to do so was wasted. Thus, you will not find CGImageSourceCreateIncremental used in this project - in no case could it be used to make the process any faster than it is.
So, when using highly compressed images and a fast WIFI connection, a large component of the total time between starting the image fetches and their display is the decode time. Decode time is the duration of decompressing an encoded image data object into a bitmap in memory.
Fortunately, libjpeg provides a mechanism to incrementally decode jpegs (well, it cannot do this for progressive images, so be aware of the type). There are scant examples of this on the web, so I had to spend quite a bit of time getting it to work. While I could have used libjpeg, I tripped over the libjpeg-turbo open source library. If you have to use an external library, you might as well use one that has acceleration for the ARM chips used by iOS devices. It has the added benefit that, once linked into your project, you can use it for faster decoding of local files too.
To use this feature, you have to have the libturbojpeg.a library (and headers). You have three options:
use the installer for 1.2.0 from http://sourceforge.net/projects/libjpeg-turbo. I have not yet tested this but it should work.
download my libjpeg-turbo-builder project and do a “Build” (it uses svn to pull the source). You’ll need the latest autoconf etc. tools in this case. This way you can build either the latest svn or a tagged release.
Use the libturbojpeg.a file I’ve included in this project (it’s 1.2.0, which I built myself using the Xcode project as described above).
Process:
the download starts, so allocate a jpeg decoder
when web data appears, first get the header, then allocate the full file needed to hold the image
as data arrives, feed it to the jpeg decoder, which decompresses whatever complete scanlines it can and writes them out toward the tile file
Using an iPhone 4 running iOS 5, the sample images take around a second each to decode using CGContextDrawImage. But using incremental decoding, that time is spread out during the download (effectively loading the processor with work during a time it’s normally idling), taking that final second of delay down to effectively 0 seconds.
For this networked code, the meaningful time metric measures the time from when the first image starts to download until the last one finishes.