22 January 2012 · Published in: Code, iPhone · 35 comments

The Problem

You may be interested in how to wire up or sync CoreData to a remote web service.  There are plenty of frameworks to do this, like RestKit, RestfulCoreData, and various defunct libraries (CoreResource, etc.)  The problem with these libraries is twofold:

  • They assume things about your backend (like “It’s REST!”  or “It has sane queries!”  or even “It parses!”) that you know are not going to be true of whatever backend you’re working with.  Especially dangerous are super chatty protocols that pull down more data than you need, which you know will change halfway into the project when everyone discovers that it’s slow, causing you to rewrite a bunch of fetches.  Over and over again.
  • They require you to run data queries against some arbitrary Cocoa API, which then multiplexes to both CoreData and the backend.  This is bad because it’s not portable, because if the backend API changes, your code will have to change, and because it’s unclear to what extent CoreData features like faulting and proxy arrays “just work”, which you intend to rely on for performance.

As a result, people end up rolling their own sad little sync engine that is slightly custom for each application, highly coupled to the backend (and changes as it changes), and cannot be effectively re-used.  Or alternatively, they end up writing their own [MyAPI doSomeKindOfFetch] helper classes that require a lot of thinking in application code to use correctly (caching policies, etc.)

Instead of doing this, you should be using NSIncrementalStore.

NSIncrementalStore

NSIncrementalStore is perhaps the best-kept secret in iOS 5.  It doesn’t show up in any advertising materials, barely rates a passing mention in WWDC tech talks, and has the world’s shortest programming guide, at only a few pages.  It is so well hidden that I only discovered it very recently.  But it’s easily my favorite new iOS 5 feature.

Essentially you can now create a custom subclass of NSPersistentStore, so that instead of your NSFetchRequest hitting a local SQLite database, it runs a method you define that can do something arbitrary to return results (like make a network request).  This means that fetching remote objects is now as easy as

NSFetchRequest *myFetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"MyEntityName"];
NSArray *results = [moc executeFetchRequest:myFetchRequest error:nil];

Even cooler, we now get network faulting, so in a code snippet like this, each property access can potentially fire a network request:

for(MyEntityName *entity in results)
{
    NSLog(@"%@",entity.someProperty); //potentially a network request
} 

(Of course, it doesn’t have to be. Your custom subclass can fulfill the request however the heck it wants to, including from a local cache.)

So why should you be using NSIncrementalStore? One reason is because it lets you, in application-land, use maximally expressive queries, independently of how bad the backend is today. Perhaps you are working with a terrible backend that only supports 1 API call that dumps the entire database.  You can still write powerful queries in your application to request highly-specific data.  Then you can respond to those application requests by serving up the highly-specific data from cache.

But when the backend guys finally wake up and realize it’s a bad idea and give you five different kinds of requests, you only have to make that change once.  The application is still emitting highly-specific queries.  You look at the query that the application wants you to serve and select the API call(s) that are most efficient to serve that query, doing whatever sort or filter that the backend doesn’t handle as an in-memory step.  And you only have to worry about this in one place.
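As a sketch of what that one place might look like (the backend helper object and its methods here are hypothetical, not a real API), the store can inspect each fetch request and route it to whichever call covers it most cheaply:

```objc
// Hypothetical sketch: route a fetch request to the best available API call,
// then finish the query in memory with the request's own predicate and sort.
- (NSArray *)objectsForFetchRequest:(NSFetchRequest *)request {
    NSArray *candidates;
    if ([self.backend supportsFilteredFetchForEntity:request.entityName]) {
        // The backend can filter server-side; pass the predicate along.
        candidates = [self.backend fetchEntity:request.entityName
                                     predicate:request.predicate];
    } else {
        // Terrible backend: one call that dumps everything.
        candidates = [self.backend fetchAllObjectsForEntity:request.entityName];
        if (request.predicate) {
            candidates = [candidates filteredArrayUsingPredicate:request.predicate];
        }
    }
    if (request.sortDescriptors) {
        candidates = [candidates sortedArrayUsingDescriptors:request.sortDescriptors];
    }
    //not shown: anything else the backend didn't handle (offset, limit, etc.)
    return candidates;
}
```

The application never sees which branch was taken; upgrading the backend just moves work from the in-memory step to the network step.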

And then as the backend slowly improves, you can slowly take advantage of more and more features.  As a slightly absurd example, if some day they let you execute raw SQL (the horror!) directly on the backend, you can write a little SQL emitter that takes CoreData queries and emits SQL that fulfills them.  The application continues to work at every step, it just gets incrementally faster as the network traffic gets more efficient.
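To make the absurd example slightly concrete, here is a toy sketch of such an emitter. It handles only a single "key == constant" comparison, where a real one would have to cover the full NSPredicate grammar (and escape its arguments):

```objc
// Toy sketch: translate one trivially simple CoreData predicate into SQL.
// A real emitter would cover compound predicates, operators, escaping, etc.
- (NSString *)sqlForFetchRequest:(NSFetchRequest *)request {
    NSMutableString *sql = [NSMutableString stringWithFormat:@"SELECT * FROM %@", request.entityName];
    if ([request.predicate isKindOfClass:[NSComparisonPredicate class]]) {
        NSComparisonPredicate *p = (NSComparisonPredicate *)request.predicate;
        if (p.predicateOperatorType == NSEqualToPredicateOperatorType &&
            p.leftExpression.expressionType == NSKeyPathExpressionType &&
            p.rightExpression.expressionType == NSConstantValueExpressionType) {
            [sql appendFormat:@" WHERE %@ = '%@'", p.leftExpression.keyPath, p.rightExpression.constantValue];
        }
    }
    return sql;
}
```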

And CoreData really is a fantastic API to use to talk to remote web services, because it lets you interact with remote objects as if they were local objects.  Everybody has forgotten: this was originally the problem that Objective-C / Cocoa was designed to solve!  We do not need yet another function call to retrieve a pansy flat representation of a record on a server.  ObjC has had a better solution than that for 30 years!

NSIncrementalStore – the unofficial guide

As cool as NSIncrementalStore is, it is missing a lot of love from the Apple documentation writers.  (Like any new API, there are undocumented bits, and some scary legitimate bugs, that nobody has gotten around to fixing yet.)  The best resources on it are

  • The NSIncrementalStore and NSIncrementalStoreNode class reference
  • The Incremental Store Programming Guide (which is unfortunately way too short)
  • A brief treatment in this WWDC talk which is illuminating
  • The various resources on NSAtomicStore (class references, programming guides) are also very helpful in filling in missing details.  NSAtomicStore is another type of subclassable CoreData persistent store which was introduced some time ago and is better documented.
  • (and hopefully, this blog post)

Init

The first thing you have to do, of course, is override init in your NSIncrementalStore subclass.


- (id)initWithPersistentStoreCoordinator:(NSPersistentStoreCoordinator *)root configurationName:(NSString *)name URL:(NSURL *)url options:(NSDictionary *)options {
    if (self = [super initWithPersistentStoreCoordinator:root configurationName:name URL:url options:options]) {
        //set up caching, HTTP clients, or other ivars
    }
    return self;
}

Load metadata

When you chain to the super initializer, you will get a loadMetadata: callback.  Note, and this is very important: if you fail to set the appropriate metadata here, your incremental store will be improperly initialized.  This can result in strange EXC_BAD_ACCESS errors that I see people complaining about on devforums.apple.com.

- (BOOL)loadMetadata:(NSError *__autoreleasing *)error {
    [self setMetadata:[NSDictionary dictionaryWithObjectsAndKeys:@"mystoretype",NSStoreTypeKey,@"arbitrary-uuid",NSStoreUUIDKey, nil]];
    return YES;
}

You are required to set values for NSStoreTypeKey and NSStoreUUIDKey; if you fail to do so the incremental store will not be initialized.  See the appropriate documentation in NSPersistentStoreCoordinator to learn more about these keys.

Setting up NSPersistentStoreCoordinator

Now, elsewhere in your application, you set up the NSPersistentStoreCoordinator.  You tell it to use the custom NSIncrementalStore like this:


NSString *mycustomstoretype = @"mystoretype";
[NSPersistentStoreCoordinator registerStoreClass:[MyIncrementalStoreSubclass class] forStoreType:mycustomstoretype];
coordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:MANAGED_OBJECT_MODEL_REF];
persistentStore = [coordinator addPersistentStoreWithType:mycustomstoretype configuration:nil URL:nil options:nil error:&err];

Easy as punch.  Notice that on the first line we set the string to “mystoretype”, which is the same string we used in our metadata.  Failing to make these strings match, in my experience, sometimes (but not always) causes the persistent store to fail to initialize, with an error message complaining that the strings don’t match.  So my advice to you would be to store that string in a common location (like in the .h file of the NSIncrementalStore) to ensure that they match.
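One way to keep the strings in a common location (the file and constant names here are just an illustration) is a single exported constant declared in the store's header:

```objc
// MyIncrementalStore.h — one definition of the store type string, shared
// by loadMetadata: and the NSPersistentStoreCoordinator setup code.
extern NSString * const MyIncrementalStoreType;

// MyIncrementalStore.m
NSString * const MyIncrementalStoreType = @"mystoretype";
```

Then both `registerStoreClass:forStoreType:` and the NSStoreTypeKey entry in loadMetadata: reference `MyIncrementalStoreType`, and the two can never drift apart.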

Handling fetch requests

Now we have to actually handle the fetch requests.  This is where things get tricky, because this method

  • Can handle fetch requests or save (i.e. write) requests
  • Can handle fetch requests that are looking for an object, fetch requests that are looking for a count, or undefined future request types
  • Does not actually return any of the object’s properties or attributes, which are requested on a different callback
  • But is responsible for filtering or sorting on those properties

I’m going to show you just a dummy implementation to get you started, because the full scope of things you can handle in this method is extraordinary.


- (id)executeRequest:(NSPersistentStoreRequest *)request withContext:(NSManagedObjectContext *)context error:(NSError **)error {
    if (request.requestType == NSFetchRequestType) {
        NSFetchRequest *fRequest = (NSFetchRequest *)request;
        if (fRequest.resultType == NSManagedObjectResultType) { //see the class reference for a discussion of the types
            if ([fRequest.entityName isEqualToString:@"MyEntity"]) {
                //network request of some type?
                NSManagedObjectID *oid = [self newObjectIDForEntity:fRequest.entity referenceObject:@"SERVERS_UNIQUE_OBJ_ID"];
                //not shown: in-memory sorting, filtering, etc.; see the class reference for the full scope of things you should be handling if the backend does not
                //not shown: caching
                return [NSArray arrayWithObject:oid];
            }
        }
    }
    return nil;
}

Note that you are responsible for handling all filters and sorts set on the fetch requests (if your backend does not).  This is a lot of work, and I am considering some way to abstract this work so that there is a “fall back” in-memory sort and filter implementation that can be re-used between NSIncrementalStore subclasses, but I have not yet thought through all the details.
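Such a fall-back might look something like this sketch (the helper's name is mine, and this is not exhaustive; see the NSFetchRequest class reference for the full set of properties you would need to honor):

```objc
// A sketch of a "fall back" in-memory step: apply the fetch request's
// predicate, sort descriptors, offset, and limit to an array the backend
// returned unfiltered.
static NSArray *DCApplyFetchRequestInMemory(NSFetchRequest *request, NSArray *objects) {
    NSArray *result = objects;
    if (request.predicate) {
        result = [result filteredArrayUsingPredicate:request.predicate];
    }
    if ([request.sortDescriptors count]) {
        result = [result sortedArrayUsingDescriptors:request.sortDescriptors];
    }
    //fetchOffset and fetchLimit must also be honored by hand
    if (request.fetchOffset >= [result count]) {
        return [NSArray array];
    }
    if (request.fetchOffset > 0) {
        result = [result subarrayWithRange:NSMakeRange(request.fetchOffset, [result count] - request.fetchOffset)];
    }
    if (request.fetchLimit > 0 && request.fetchLimit < [result count]) {
        result = [result subarrayWithRange:NSMakeRange(0, request.fetchLimit)];
    }
    return result;
}
```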

Note that I am returning nil, but (oddly) not setting the error. Why is this? Because due to an undocumented bug in NSIncrementalStore (rdar://10732696), you cannot reference an error created inside an executeRequest:withContext:error: call from outside the call. So if you do this:

- (id)executeRequest:(NSPersistentStoreRequest *)request withContext:(NSManagedObjectContext *)context error:(NSError **)error {
  if (error) *error = [NSError errorWith...];
  return nil;
} //*error goes out of scope and is deallocated by the system, in spite of the fact that the caller in theory has a strong reference to it.

If you want your errors to survive the end of the method, you must make a manual call to objc_retain when you create them. Yes, really. File a duplicate bug with Apple.
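A sketch of the workaround follows. Under ARC you cannot call objc_retain directly from Objective-C code; CFBridgingRetain has the same net effect of bumping the retain count, which is what I use here. The error domain and code are placeholders:

```objc
// Workaround sketch for rdar://10732696: manually retain the NSError so it
// survives past the end of executeRequest:withContext:error:.
- (id)executeRequest:(NSPersistentStoreRequest *)request
         withContext:(NSManagedObjectContext *)context
               error:(NSError **)error {
    if (error) {
        NSError *e = [NSError errorWithDomain:@"MyStoreErrorDomain" code:-1 userInfo:nil];
        (void)CFBridgingRetain(e); //deliberate over-retain so the caller's reference stays valid
        *error = e;
    }
    return nil;
}
```

Yes, this leaks the error object by design; that is the price of the bug until Apple fixes it.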

Faulting

Alright, so now we’ve returned a set of fault objects to our application. But what happens when our application tries to access the value of a property? Our NSIncrementalStore class needs a new callback:

- (NSIncrementalStoreNode *)newValuesForObjectWithID:(NSManagedObjectID *)objectID withContext:(NSManagedObjectContext *)context error:(NSError *__autoreleasing *)error {
    //not shown: fetching myPropertyValue from network or cache
    return [[NSIncrementalStoreNode alloc] initWithObjectID:objectID withValues:[NSDictionary dictionaryWithObject:@"myPropertyValue" forKey:@"myPropertyName"] version:1];
}

This method handles the fault fire of our managed objects (at least for property values). A similar callback exists that fires faults for relationship accesses.
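That callback is newValueForRelationship:forObjectWithID:withContext:error:. A minimal sketch (the server-side ID is a placeholder, and a real implementation would of course fetch the related IDs from the network or a cache):

```objc
// Sketch of the relationship-fault counterpart. For a to-one relationship,
// return the destination object's NSManagedObjectID; for a to-many, return
// a collection of IDs.
- (id)newValueForRelationship:(NSRelationshipDescription *)relationship
              forObjectWithID:(NSManagedObjectID *)objectID
                  withContext:(NSManagedObjectContext *)context
                        error:(NSError **)error {
    //not shown: fetching related object IDs from network or cache
    NSManagedObjectID *destID = [self newObjectIDForEntity:relationship.destinationEntity
                                           referenceObject:@"RELATED_OBJ_SERVER_ID"];
    if (relationship.isToMany) {
        return [NSArray arrayWithObject:destID];
    }
    return destID;
}
```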

A warning about threads

Generally speaking, when you use CoreData, you are interacting with local queries.  So calling managedObject.property is not terribly slow, and is something that can probably reasonably be done on the main thread.

However, when you introduce expensive remote queries into the mix, it’s not so simple.  Suddenly, accessing the property of an object can take a few seconds, or can potentially not be available if you’ve retired to your concrete-enforced nuclear bunker (or if the wind changes direction and you’re using AT&T).

Since you probably don’t want to do network requests on the main thread, you are going to be thinking about how to move CoreData code into the background.  CoreData threading is a scary and mystical topic, and I’ve talked to plenty of smart people who don’t really understand how it works.

The “cliff notes” answer is that everything is threadsafe except NSManagedObjectContext (i.e. the thing you use to run queries) and NSManagedObject (i.e. your entities themselves).  If you are doing cool things with threads, you need to make sure that you have a separate context for each thread and that you do not pass NSManagedObjects between threads.  You can serialize an NSManagedObject to its ID and then get a new NSManagedObject from the new context using the ID.
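In code, that dance looks like this (the background queue and its context are assumed to be set up elsewhere, and the context is only ever touched on that queue):

```objc
// Sketch: hand an object between threads by its ID, never the object itself.
NSManagedObjectID *objectID = [managedObject objectID]; //IDs are safe to pass around
dispatch_async(backgroundQueue, ^{
    NSError *error = nil;
    //rehydrate a fresh NSManagedObject in the background thread's own context
    NSManagedObject *localCopy = [backgroundContext existingObjectWithID:objectID error:&error];
    //work with localCopy only on this queue
});
```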

But a better pattern for most simple applications might be to do all CoreData access on its own thread and use callbacks to retrieve the results of queries.  I am considering adding some support for this in my CoreDataHelp project.  For example:


[CoreDataHelp executeFetchRequest: ... withCallback:^(NSArray* results, NSError *e) { ... }]  //an async method that calls you back with the results, which are automatically installed in your thread's MOC

[CoreDataHelp bgFetch:(id) managedObject key:(NSString*) key onCompletion:^(id value)]; //calls you back with the result of the property fetch

I’m still playing around with the syntax.  I like the first method quite a lot, but I have mixed feelings about whether wrapping a property fetch is a good idea in practice (or if I should try to rely on pre-caching the property values and only faulting to cache).  It may take me a few applications to settle on a pattern.

In conclusion

NSIncrementalStore is one of the best-kept secrets in iOS 5.  It has the potential to really change how we interact with remote data on iOS.  Today, integrating with web services on iOS is complicated, and doing anything nontrivial requires importing three different (conflicting) JSON parsers, dealing with SOAP fail, importing laughably bad libraries like ASIHTTPRequest that everybody seems to use for everything, doing low-level NSURLConnection hacking that is (at best) very hard to do correctly, and much more.  You need about fifteen tabs of documentation open on a second monitor just to get anything done, to say nothing of how badly your application code gets polluted with backend implementation details.  Not to mention that one service you’re using that has taken it upon itself to produce its own poorly-written ObjC bindings, which somehow make the exact opposite design choices that you would have made for your application, so you have to fork and hack them pretty aggressively to make them work, changes which will never make it upstream.

Now consider the alternative:  we start treating the NSIncrementalStore subclass as the way to wire up remote services to Cocoa.  (Either the remote service or the community starts wrapping up GitHub, Flickr, et al. into NSIncrementalStores as the recommended way to talk to remote services.) Now you don’t have to think about whether they’re really using REST or only sort-of using REST, or whether their JSON really parses, or which set of requests you need to make to get the data you want.  All you have to do is understand the object graph they expose (i.e. read a couple of class references), write a couple of NSFetchRequest calls, and you’re done.  The application interface is the same for every service.  This radically simplifies the complexity of writing API-consuming applications.

All the players in the iOS backend space (Parse, Kinvey, and Stackmob) are betting the farm that you would rather talk with Cocoa objects than worry about REST.  And they are right.  But with NSIncrementalStore, you can graft object support quite beautifully onto arbitrary, existing APIs.  And not only that, but you get the power of decades of improvements to CoreData out of the box (like faulting, uniquing, a powerful query language, batched queries, undo support, and much, much more).

But I think that Parse et al. have much to worry about.  NSIncrementalStore is a complex beast, not for lesser developers, but it will gain a lot of mindshare on account of the fact that you can shoehorn existing APIs into it, something that will never be possible with cloud providers.  These two forces will cancel each other out.  Meanwhile, developers will be concerned about the long-term viability of businesses like Parse, or about vendor lock-in, and if they have experience maintaining NSIncrementalStore code for GitHub et al. they may in many cases consider NSIncrementalStore+REST a better candidate for new APIs than a proprietary service, in spite of its additional complexity.  Plus you get decades of cool CoreData tech for free.  (One area in which Parse et al. can still win is thread management, which is bad with CoreData because it was originally built for fast local queries, but can be fixed through appropriate and lightweight helper classes.)

A call for NSIncrementalStore subclasses

So the next time you are reading Yet Another API Manual for consumption in your iOS / Mac application, consider whether the right response is to push an NSIncrementalStore subclass up to GitHub, even if it only supports a small subset of the remote API.  API bindings are an area that can really benefit from collaborative, open-source development, even if you are working on an otherwise proprietary Cocoa project.  I know that as I consume APIs going forward, I will be spawning lots of OSS NSIncrementalStore projects as a much better way to encapsulate interacting with remote data.


Want me to build your app / consult for your company / speak at your event? Good news! I'm an iOS developer for hire.

Like this post? Contribute to the coffee fund so I can write more like it.

Comments

  1. Frank
    Sun 22nd Jan 2012 at 10:44 pm

    I disagree with your premise. It’s not sad, nor a waste of time, to roll your own data interfaces. We are programmers; this is what we do. If a programmer isn’t capable of writing the code to make a RESTful API request and do something useful with the results, how likely is that programmer to be successful at doing anything complex?

    I’ve been writing code a long time (25 years). When I started, programmers were expected to know how to store data directly in binary files, build our own indexes, binary trees, and hash maps. I’m not suggesting we ought to go back to that. But what I do see, time and time again, is new developers who lean so heavily on third-party libraries that they (a) don’t really understand what their code is doing, (b) produce applications that are a tangled rat’s nest of dependencies, and (c) don’t really program so much as configure products and connect API calls.

    In my app, I use an NSURLConnection to obtain data from web services. Why not? It’s got a simple API, it automatically threads itself so I never worry about blocking the main thread, I can launch as many parallel requests as I want, and I can even cancel them before they finish. When I needed to start storing local data in my app, I chose SQLite over any of the iOS data APIs. Again, why not — it does what I need, and since I already understood both SQL and database design, all I had to learn were a few particulars about how to use this specific tool. Not a whole new storage paradigm.

    It’s a mistake to use technology just because it’s there. Use the technology if it genuinely solves a problem, and it’s a problem that would otherwise take you months or years to solve yourself. A dozen years ago, as a beginning Java developer, I can remember writing complex EJB entity beans to do simple CRUD operations. I did it because I was told it was the “right” thing to do. Today we do the same stuff via thin SQL wrappers like MyBatis or SQLite.

    Gold plating is widely seen as one of the major pitfalls of software development. Big, complex APIs like EJB or Core Data sell you on the vague idea that if you use them, your code will magically become more portable or more flexible or more performant in the future. Such promises rarely come true, and even if they did, it rarely matters, since most applications are rewritten or scrapped every 18 to 24 months. Do what works, now: don’t waste time thinking about a future that may not even exist.

  2. Drew Crawford
    Sun 22nd Jan 2012 at 11:46 pm

    The danger of not using a system library like CoreData is that you will wake up one day and discover that you have cloned it :-)

    I understand the danger of roping in large libraries (like the ones I referenced in the intro) that nobody really understands. However, CoreData is not one of those libraries–it is essentially EOF in new clothes, a library which has been solving real problems since at least 1994, if not earlier. Well before NSURLConnection was around.

    As for NSURLConnection, if you are under the impression that it is a “simple API”, you are in for a surprise. Assuming that you can even find the documentation, most do not read it (or at least do not follow it). For example:

    The delegate should concatenate the contents of each data object delivered to build up the complete data for a URL load.

    Anecdotally, people usually set rather than append. Clicking through search results on GitHub, this is a common error.

    And that’s playing NSURLConnection on “easy mode”. Here’s hard mode:

    In rare cases, for example in the case of an HTTP load where the content type of the load data is multipart/x-mixed-replace, the delegate will receive more than one connection:didReceiveResponse: message. In the event this occurs, delegates should discard all data previously delivered by connection:didReceiveData:, and should be prepared to handle the, potentially different, MIME type reported by the newly reported URL response.

    I have literally never seen this handled correctly, outside of Apple.

    And these are just two examples. I can run through the documentation and find dozens of common antipatterns that I (unfortunately) see every day.

    Of course, you can use it a lot more safely with the new sendAsynchronousRequest:queue:completionHandler: call recently added in OS X 10.7 / iOS 5. But for code written before this method (and much code written after it), if it contains the string “NSURLRequest” in the source code it’s virtually a guarantee that it’s buggy.

    There are, of course, valid reasons not to rely on stable APIs with many eyes like CoreData. Perhaps you need to use as few proprietary APIs as possible to write code that compiles to multiple platforms. Perhaps you are doing some really high-performance work where traversing ObjC object graphs gives too much overhead vs flat C arrays. But in any case, you make this type of decision from the position of understanding exactly how much of CoreData you’ll be replicating by not using it. You make this sort of decision from a position of understanding what it’s for, and why your problem is different.

    But rarely does this happen in practice. In actual fact, people avoid CoreData because it’s big and scary and they can’t be bothered to take a week to learn it. Whenever this happens, they are inevitably doomed to replicate it poorly. Probably 10% of my income is fixing bugs in someone’s poorly implemented manual dispatch of an SQL query. I guess it keeps us employed, but it doesn’t have to be like this.

  3. Mon 23rd Jan 2012 at 5:31 pm

    NSIncrementalStore APIs are very promising, and it’s a shame that there is a lack of the full-featured documentation. Thanks a lot for a brief introduction and sharing your thoughts, every bit of information on this topic matters!

  4. Brad Gibbs
    Tue 24th Jan 2012 at 7:22 am

    Thanks for posting this! I’ve been waiting since the early betas of 10.7 for someone to blog about NSIncrementalStore. Yours is the first I’ve found. I couldn’t agree more that this will become Apple’s solution to post-Web 2.0. I’m surprised others don’t seem to agree.

    Have you considered using nested managed object contexts? According to the Core Data release notes for iOS 5, a child context can be created on a separate thread. I imagine using the Main MOC on the main thread to represent data in the UI. Child MOCs service remote calls without blocking the main thread and report back when they have data. Saving the child pushes changes up to its parent. Saving the parent saves to the persistent store.

    The presenter for the What’s New in Core Data session at WWDC last year stated that Apple did all of the heavy lifting by implementing iCloud and left solving the back-end for NSIncrementalStore as an exercise for the developer. I don’t think that’s entirely true. I think NSIncrementalStore is something Apple plans to flesh out in later releases. The ability to easily share data among many users on iPhone, iPad or Mac would be a HUGE hit, particularly among those developing for the SMB market. I’m a little disappointed they haven’t released more documentation or a sample app, but I think this will be one of the major, heavily-advertised features of an upcoming iOS or OS X release.

  5. Zamous
    Sat 28th Jan 2012 at 5:39 pm

    @Frank – silly

    I completely agree with this. I can’t wait for this to evolve into something a little more clear on usage.

    My big question for NSIncrementalStore and REST is the “caching” part. I don’t want to write my own wrapper for sqlite all over again. Would be sweet if there was a way to get update notifications between two MOCs running on two different DataStores (one NSIncremental and the other sqlite). Is this possible?

    Thanks for being one of the earliest to blog on this.

  6. Drew Crawford
    Mon 30th Jan 2012 at 4:47 am

    I thought about this. My recommended approach would be to use something like the Delegation pattern. Your NSIncrementalStore owns its own MOC, its own SQL persistent store, etc., that it turns to when it serves up fetch requests. Then it can decide how to implement its own caching policy, mux between the SQLite store and the network request with extreme flexibility. If the boilerplate gets a little much, maybe you can put the caching stuff into a category or a superclass.

    Trying to route between two different persistent stores “upstream” in the coordinator is probably not a good idea if your goal is a caching policy.

  7. Mon 13th Feb 2012 at 12:22 pm

    Really, really cool stuff. I didn’t quite understand it, so I figured I would try to implement an actual NSIncrementalStore subclass. There are many things wrong (such as using blocking url fetching), but I intend to clean it up and publish it as a tutorial:

    https://github.com/chriseidhof/NSIncrementalStore-Test-Project

  8. Matt Whiteside
    Tue 14th Feb 2012 at 10:09 pm

    @Chris Eidhof — nice work. Looking forward to the tutorial.

  9. Sat 18th Feb 2012 at 10:43 am

    @Matt thanks! I just wrote an article on it:

    http://chris.eidhof.nl/post/17826914256/accessing-an-api-using-coredatas-nsincrementalstore

    @Drew: I don’t mean it as spam, I think it would benefit everybody.

  10. Matt Whiteside
    Thu 23rd Feb 2012 at 12:14 am

    Drew and Chris,

    Thanks a bunch for posting these tutorials. A couple of weeks ago, I was thinking, ‘wow this NSIncrementalStore class looks cool and useful but I have no idea where to start’. Now I have a basic NSIncrementalStore subclass working

    I too have been experimenting with loading the data asynchronously, and it does not seem straightforward. My first attempt was something like this: in newValueForRelationship:forObjectWithID:withContext:error:, make the async rest call, so then the method immediately returns nil for the relationship. Then when the async rest call completes, have a block handle the returned data, and post a notification, and observers should process it. But what I’m running into is that the objects being created from the newly returned json, are never being ‘reconnected’ with the object they were meant to be related to. Maybe returning an empty NSSet from newValueForRelationship:forObjectWithID:withContext:error: would be a better approach. Perhaps I’ll post again when I’ve had a chance to experiment with this.

    This is not a request for help, just posting some of my findings.

  11. Thu 23rd Feb 2012 at 5:32 am

    Hi Drew & Chris, Thanks for your write-ups. I started looking into NSIncrementalStore yesterday. I’ve used CoreData a lot the past 3 years as model layer in almost all the apps that we made at noodlewerk.com. I’m really looking forward to seeing NSIncrementalStore at work (itching fingers).

    About the async/sync loading discussion: isn’t it easier to make everything load synchronously and just use a child context on a background queue? In the past I’ve pretty much tried every possible way of doing “expensive” or lengthy operations with CoreData and almost always ended up creating a new background queue + context, doing the work there and then saving back into the parent/main moc (and merging if you’re still pre-iOS 5/Lion).

    @Zamous about caching: it would be nice to have some kind of transparent caching, e.g. using a local SQLite-backed NSPersistentStore. It would be nice if the same object model could be reused, although probably some extra metadata needs to be added to each object (“lastUpdatedDate”, “lastUsedDate”, etc.) in order to be able have it update and cleanup following a certain cache policy. Does that make sense?

    Martijn

  12. Thu 23rd Feb 2012 at 5:42 am

    PS: Maybe a little bit outside of the scope of this article, but touching the subject nevertheless: How to deal with a webservices that return their results as a series of pages? E.g.: http://developer.github.com/v3/#pagination

  13. Thu 23rd Feb 2012 at 7:53 am

    Re: the background queues, I never did that, would be great to have an example. Maybe worth a blog post?

    Re caching: I don’t think it’s easy to do. I think caching will need to behave differently for each app and service. And if you start to cache writes/updates, it might become a whole lot more complex (I think you need to implement synchronization). Caching reads would definitely be possible in most cases (and also easy, I guess).

    About pagination: you can do that by inspecting the fetchLimit and fetchOffset of the NSFetchRequest, I think.

  14. Drew Crawford
    Thu 23rd Feb 2012 at 4:32 pm

    Re: threading, I have been wondering if NSPrivateQueueConcurrencyType might be the right way to go.

    Re: caching, I’ve implemented something that caches to an NSInMemoryPersistentStore using the same MOM. (IncrementalStore owns its own complete CoreData stack that it uses to answer queries, like a delegation pattern). It wasn’t trivial. The keys were:

    1) Graft on some additional properties to the objects to store “created time” and “unique ID”. (Unique ID here is independent of cached vs network object, so we can determine whether or not a network object is cached.) If I did this again I would look into Configurations rather than grafting.

    2) Provide several different caching templates. For instance, I needed to support both a mutable / reload access pattern (we expect that the object can change) vs an immutable / incremental fetch access pattern (we expect to get new objects, but not update existing ones.)

    3) Subclass NSFetchRequest, so I could put cache expiration control flags on it. Emitting a query that has “WHERE expireTime < ...” doesn’t make any kind of sense.

    4) Use a responder chain design pattern to fulfill all the requests including relationship faults, object faults, etc. Then put your caching code as the first responder in the chain.

    After I do this on a couple of projects and have distilled it to general principles I will write a set of helper classes in CoreDataHelp for general use.

  15. Thu 01st Mar 2012 at 7:05 am

    hi Drew, thanks for sharing your insights.

    I’ve been playing a bit with Chris’ test project.
    My first concern was how to orchestrate fetching in the background (avoiding blocking UI) while at the same time keeping NSFetchedResultsController’s very handy change monitoring features.

    Here’s my fork of the project:
    https://github.com/martijnthe/NSIncrementalStore-Test-Project

    After playing a lot with it, I decided that:
    – it’s best to subclass NSFetchRequest / NSSaveChangesRequest (to add block properties to it that can be called async at a later point in time)
    – to have the store return immediately (either an empty array or cached results)
    – to later fault objects that were fetched in the background into the original caller’s MOC, and then call the ‘delegate’ blocks of the request subclass.

    See NSIncrementalStore+Async.h for more details.
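    The first bullet, roughly (the names here are illustrative — NSIncrementalStore+Async.h in the repo has the real interface):

    ```objectivec
    #import <CoreData/CoreData.h>

    // Sketch of an NSFetchRequest subclass carrying a completion block. The
    // store returns cached (or empty) results synchronously, kicks off the
    // network fetch, and invokes the block when fresh object IDs arrive.
    @interface DCAsyncFetchRequest : NSFetchRequest
    @property (nonatomic, copy) void (^resultsBlock)(NSArray *objectIDs, NSError *error);
    @end

    @implementation DCAsyncFetchRequest
    @end
    ```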

    Looking forward to some feedback.

    Best,
    Martijn

  16. Tom Wilson
    Sun 18th Mar 2012 at 12:56 am

    Hey this is awesome. Weird that this isn’t publicized more heavily.

    One thing though, I was just wondering why you consider ASIHTTPRequest so bad? I’ve found it to be one of the best http libraries anywhere. Not the simplest, but it’s always been able to do what I need it to do even when doing some relatively obscure things.

    I’ve always liked the look of AFNetworking but I’m worried it won’t work as soon as I need something out of the norm.

  17. Drew Crawford
    Sun 18th Mar 2012 at 6:47 am

    Just for starters, the author advises you not to use it :-)

    http://allseeing-i.com/%5Brequest_release%5D;

  18. Tue 27th Mar 2012 at 9:48 am

    Has anyone successfully implemented persistence using an incremental store? I am getting stuck on obtainPermanentIDsForObjects:error:, mainly because there doesn’t seem to be any way to allocate or configure my own NSManagedObjectID objects. Also, it seems to expect me to generate the IDs locally and then hand them off to the store, which doesn’t work if I am using incremental IDs. Any thoughts?

  19. Drew Crawford
    Tue 27th Mar 2012 at 7:08 pm

    @Quinn, Look at any of the numerous projects referenced here in the comments. You can see a sample implementation of obtainPermanentIDsForObjects:error: here: https://github.com/drewcrawford/CoreDataHelp/blob/master/CoreDataHelp/DCACacheIncrementalStore.m#L91

  20. Agent
    Sat 12th May 2012 at 3:53 pm

    A persistent local cache doesn’t seem like it would be an issue. Just set up your SQLite NSPersistentStore alongside this NSIncrementalStore (implemented as Martijn suggests), and let NSPersistentStoreCoordinator hash it out.

    The relationship is {Store 1, Store 2, …} ↔ Store Coord. ↔ {Context 1, … }. As far as I understand it, a given NSManagedObject is located in an NSManagedObjectContext, and the context has a connection to several NSPersistentStores.

    So changing a managed object ought to propagate the change to all the connected stores — in this case, both the remote store and the local one. A query also executes on all stores, according to the executeRequest:withContext:error: docs. Since NSManagedObjects are unique per context, presumably different stores’ fetched managed objects with the same ID would be merged into a single managed object in the context, using the context’s merge policy.

    I bet the fetch would work like this, by the way:

    In the case of a slow store S with deferred managed object updates (i.e. it initializes all attributes to nil until the query completes, and when the query finishes, updates them again) and a fast SQLite store F, I think a managed object would be initialized with nil/F values first (the first store’s executeRequest: results), then updated with F/nil values second (the other store’s results), then S values last (as the slow store finishes its query).

    The second change would either be made with conflict resolution or would come through as a normal managed object attribute change, and the last change would definitely come through as a normal attribute change (because S would do a lookup of the managed object in the context and change its values that way).

  21. Drew Crawford
    Mon 21st May 2012 at 12:40 pm

    @Agent – the danger with your approach is merge conflicts. NSPersistentStoreCoordinator is not really prepared for the case that two stores return conflicting information. NSManagedObjectContext does have a conflict resolution mechanism, but it is A) primitive, designed for a different use case, and B) designed for merges between contexts, not merges between stores. Basically, PSC has minimal or no handling for conflicts that occur between stores.

    The other problem is that “slow” stores are not always guaranteed to actually be slow, and the behavior you describe depends on a race condition.

    It’s also a bad idea to return “blank” attributes. Attribute data is cached at many places in the stack (NSManagedObjectContext, NSPersistentStore, etc.) and is usually not cleared without extraordinary measures. If you silently update the attributes that you return from newValues… you will discover that newValues is very rarely called and nobody finds out that the values are changed.

  23. Sat 07th Jul 2012 at 3:27 pm

    Great post, great blog, and great comments on the post :) I learned a lot from all of that, thanks everyone!

    I got here thanks to this other post: /code/you-should-use-core-data/ which I really enjoyed too.
    I couldn’t help thinking about what all of this means for a framework I’ve worked on in my spare time since last November called JSRestNetworkKit (https://github.com/JaviSoto/JSRestNetworkKit, the CoreData support is not documented yet :S). I totally knew what you meant when you said (about RestKit & co.) “They assume things about your backend (like ‘It’s REST!’)”. My approach with JSRestNetworkKit takes care of handling the JSON responses and creating model objects from them, but the way the requests to your particular backend are implemented is up to you.
    What’s clear is that there’s still so much work to do in this regard. I’d love to hear what you guys think about JSRestNetworkKit, and whether it might be cool to add an NSIncrementalStore subclass to its stack, so that even creating the requests to the backend is completely transparent to the rest of the app.

  24. Fri 13th Jul 2012 at 1:52 pm

    Thank you kindly for getting us all so excited about NSIncrementalStore.

    Perhaps you’d find my latest project to be close to what you were imagining: https://github.com/AFNetworking/AFIncrementalStore

  25. Fri 20th Jul 2012 at 10:12 am

    Definitely a very good idea to start playing with NSIncrementalStore.
    Mattt, Thanks a lot for putting that project up!

  26. Sat 21st Jul 2012 at 2:58 pm

    @Drew – Hi Drew, great post. I’m also trying to figure out how to implement obtainPermanentIDsForObjects:error:

    You advised @Quinn to look at https://github.com/drewcrawford/CoreDataHelp/blob/master/CoreDataHelp/DCACacheIncrementalStore.m#L91 to see how, and I’m hoping you can elaborate on what’s happening there. You have a protocol which defines a method, uniqueID; however, there is no implementation of this method in the code. So… can you explain the reasoning here?

  27. Drew Crawford
    Sat 21st Jul 2012 at 9:52 pm

    @Daniel,

    You’re right, I really need to document CDH better. It’s in the “saves me time” stage, but not yet in the “recommended for third-party reliance” stage of the lifecycle. Working on it.

    From CoreData’s point of view, the reference object is like a “key” that identifies an object. When a fetch request comes in, and you want to serve 50 objects, you create (or re-use) 50 reference objects (which can be any NSCopying-compatible object), wrap them up in NSManagedObjectIDs, and serve them up. When it comes time to save the object, you are told the NSManagedObjectIDs and you extract your original reference object, and in this way you know specifically what object(s) the user saves (or deletes).

    The naive thing to do here (assuming you are talking to a remote database) is just to use its foreign key as a reference object, because that’s probably what the remote machine wants in order to process any update, save, delete, etc.

    The additional complexity in CDH comes because I am interested in supporting more aggressive caching scenarios (such as batched writes) in which you might not know, at the time that you “save” the object, what the foreign key actually is. I also see a more general problem: tracking objects not merely between one server and one client database, but across multiple stores on the client side and potentially multiple servers. Solving that makes it easier to build powerful software out of reusable building blocks, and CDH already uses multiple databases internally, which is much easier to debug. So I am using “uniqueID” in CDH to mean a unique identifier that is independent of the particular database in which an object is stored; this is something that can really only be provided by the application, because remote servers might dictate their own identifiers.
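    In code, the basic shape is something like this (a sketch, not CDH verbatim; it assumes the app model exposes a uniqueID attribute on each object):

    ```objectivec
    #import <CoreData/CoreData.h>

    // Minimal NSIncrementalStore subclass showing the reference-object dance.
    @interface DCSketchStore : NSIncrementalStore
    @end

    @implementation DCSketchStore

    - (BOOL)loadMetadata:(NSError **)error {
        // Every store must report a type and UUID in its metadata.
        [self setMetadata:@{ NSStoreTypeKey : @"DCSketchStore",
                             NSStoreUUIDKey : [[NSProcessInfo processInfo] globallyUniqueString] }];
        return YES;
    }

    - (NSArray *)obtainPermanentIDsForObjects:(NSArray *)array error:(NSError **)error {
        NSMutableArray *objectIDs = [NSMutableArray arrayWithCapacity:[array count]];
        for (NSManagedObject *object in array) {
            // The app-assigned uniqueID becomes the reference object: the "key"
            // we get back when CoreData later asks us to save or fault this object.
            id referenceObject = [object valueForKey:@"uniqueID"];
            [objectIDs addObject:[self newObjectIDForEntity:[object entity]
                                            referenceObject:referenceObject]];
        }
        return objectIDs;
    }

    @end
    ```

    The store type would be registered with +[NSPersistentStoreCoordinator registerStoreClass:forStoreType:] before adding it to a coordinator, as usual.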

  28. Adrian
    Fri 30th Aug 2013 at 12:46 pm

    The back-end is dead. Long live the front-end. ™

    That’s my tm’d mantra. Have you ever worked with a sysadmin in the past 10 years? Me neither! That’s because AWS, Rackspace, Azure and others have replaced the need to ‘own’ machines. (I hated those otherwise useless guys that knew a handful of terminal commands.)
    The ongoing switch now is toward a higher level of service, or complete data storage rather than a raw machine, a la Parse, Stackmob and so forth. Now that gets rid of your back-end developer.
    Good news and bad news. Good news: why rewrite effectively the same damn backend over and over again? Bad news: you get rid of a whack of seasoned developers and your firm is run by script-kiddies.
    On an end note, I find Core Data to be utterly awesome, while EJB killed Java.

  29. Jamie Hardt
    Sat 24th May 2014 at 2:14 pm

    “The danger of not using a system library like CoreData is that you will wake up one day and discover that you have cloned it”

    Amen. I recently did a little media database app because I was convinced a custom schema and indexes would make searches faster. By the time I had it working, I’d succeeded in writing half-assed versions of NSPersistentStore, NSManagedObjectContext, NSPredicate (complete with format-string parser!), NSFetchRequest, and NSManagedObject, and I’d begun the great quest of rewriting CD’s migration infrastructure.

    And I’m really not sure it runs that fast.
