
We kick puppies too

Is REST over-hyped?  Is WS-* viable?  Is SOA evil?  Stefan steps up to the plate with RESTafarian SOA Killers.

QCon wrap up

Update:  My slides are here.  However, they’re not as effective without the demo.

What’s left to say? It seems everyone who participated in the SOA track, as a speaker or attendee, has already posted their thoughts. And others who could not attend have run with Stefan’s notes (go backwards from that link to see them all).

From my POV, it was a fantastic two days. All of the RESTians on the SOA track had excellent (if somewhat overlapping) presentations. The lone non-dyed-in-the-wool REST proponent, Sanjiva, had a—uhm, err, shall I say—provocative presentation. So let’s talk about some of the other sessions I was able to attend. On the Ruby track Obie Fernandez gave a presentation on “Designing RESTful Rails Applications.” I knew most of what Obie had to say already, but still learned a thing or two, and also learned that there’s more to learn. Must peek into the routing code. It was a good talk nonetheless, especially if you were Rails-proficient but new to RESTful application design. James Cox also gave an informative talk (with examples!) on what’s new in Rails 2.0. And Jay Fields gave a wonderfully intriguing talk on using Ruby to build “Business Natural Languages.” That is, highly specialized, English-like DSLs that can be used by subject matter experts to implement business rules. This talk was not a “gee, wouldn’t it be nice” kinda thing either. He’s really built them. With great success.
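Since it’s hard to picture a “Business Natural Language” without seeing one, here’s a minimal, self-contained Ruby sketch of the idea. To be clear, this is not Jay Fields’ implementation; the policy class, the pay rule, and the product names are all invented for illustration.

    # A toy "Business Natural Language": an English-like rule set a subject
    # matter expert could read and tweak, implemented as plain Ruby.
    class Numeric
      def percent
        self / 100.0            # lets a rule read "5.percent"
      end
    end

    class CommissionPolicy
      attr_reader :rules

      def initialize(&rule_text)
        @rules = []
        instance_eval(&rule_text)   # run the rule text in this object's context
      end

      # "pay 5.percent, :on => 'hardware'" becomes a stored rule
      def pay(rate, options)
        @rules << { :rate => rate, :product => options[:on] }
      end
    end

    policy = CommissionPolicy.new do
      pay 5.percent,  :on => 'hardware'
      pay 10.percent, :on => 'consulting'
    end

    policy.rules.each do |rule|
      puts "#{rule[:product]} pays #{(rule[:rate] * 100).to_i}% commission"
    end

The trick is little more than instance_eval plus a few well-chosen method names; the subject matter expert never has to know they’re writing Ruby.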

Brian Zimmer, lead architect at Orbitz, gave a talk on the Architecture Track about growing Orbitz both from an infrastructure and design viewpoint as well as from a U.S.-centric to a globally capable system. Brian was a good speaker, but I can’t say I learned much. Rule #1 seems to be: don’t code yourself into a corner with U.S.-centric assumptions. Rule #2 seems to be: don’t architect yourself into a corner with bad design choices. Both true. Both hard. But what really moved this talk from “green” to “yellow” for me was his discussion of caching. In order for Orbitz not to beat the crap out of their partners’ systems, they cache results for a period of time (presumably based on SLAs, since there’s no cache control info on the wire), but their original, home-grown caching solution didn’t scale. Brian described this system via the metaphor of a block of mailboxes that you might see in a post office or apartment building, each of which could hold only so much information, and only for a specific partner. And my thought was, well, who the heck designed that? They resolved the problem by purchasing a commercial caching solution, but he wouldn’t say which one.
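As an aside, “cache control info on the wire” would look something like the sketch below from the consuming side: if the partner systems set standard HTTP caching headers, a cache could expire entries per response rather than per SLA. The URI and the fallback TTL here are invented.

    require 'net/http'
    require 'uri'
    require 'time'

    # Honor the origin's caching headers if present; otherwise fall back to a
    # locally configured (SLA-style) default. The partner URI is a placeholder.
    uri = URI.parse('http://partner.example.com/fares?from=BOS&to=SFO')
    response = Net::HTTP.get_response(uri)

    if (cc = response['Cache-Control']) && cc =~ /max-age=(\d+)/
      ttl = $1.to_i                           # the origin says how long it's fresh
    elsif response['Expires']
      ttl = Time.parse(response['Expires']) - Time.now
    else
      ttl = 300                               # no hints on the wire; guess
    end

    puts "cache this representation for #{ttl.to_i} seconds"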

James Noble presented Thursday’s keynote “The Lego Hypothesis.” With a somewhat over-the-top presentation style, James said the right thing: Google is our software repository, and developers reuse code snippets and libraries just as people have been saying we should — albeit, without being weighed down by issues like “correctness.”

I arrived late on Wednesday and so could only take in two sessions. The first had Chet Haase, Erik Meijer (LINQ), Charlie Nutter (JRuby), Joshua Bloch (large pieces of Java), and Rod Johnson (Spring) debating the future of Java. Everyone was (obviously) knowledgeable and pragmatic. They did not, however, all agree. I felt myself leaning towards Charlie’s answers which can be summed up as “it’s up to you, now.” I also managed to attend Wednesday’s closing presentation, “50 in 50,” delivered by the eminent Dr. Richard Gabriel: a bizarre, delightful, entertaining multimedia romp through the history of programming languages. My favorite? HQ9+.

That’s it. A wonderful conference made better by being able to meet many people face to face for the first time, including not only my fellow track speakers but Stu Charlton, Mike Herrick, Patrick Logan, and many others.

Speaking at QCon Next Week

This is a long-delayed post.

Next week is QCon San Francisco. Stefan generously offered me a slot in the “Connecting SOA and the Web: How much REST do we need” track, alongside Steve Vinoski, Sanjiva Weerawarana, Dan Diephouse, and other luminaries. Woot.

For my part, I’ll be demonstrating the “ilities” of REST. Yes, rather than just explain things and have the audience take my word for it, I’ll be showing running code of a real (albeit simple) RESTful application. Thus, for the last two weeks, I’ve been busily learning Rails and enjoying its support for REST. It’s a simple app, and if I’d known when I started what I know now, it could have been done in a day or two. No loss; I’ll be able to leverage this time for the REST Workshops I deliver as part of my Burton Group gig.
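For the curious, here’s a sketch of the Rails 2.0-era idioms the demo leans on. This isn’t the demo app itself; the expense_reports resource is just a placeholder name.

    # config/routes.rb -- one declaration buys the whole uniform interface
    ActionController::Routing::Routes.draw do |map|
      map.resources :expense_reports   # index, show, new, create, edit, update, destroy
    end

    # app/controllers/expense_reports_controller.rb
    class ExpenseReportsController < ApplicationController
      # GET /expense_reports/1 and GET /expense_reports/1.xml
      def show
        @report = ExpenseReport.find(params[:id])
        respond_to do |format|
          format.html                            # a page for browsers
          format.xml { render :xml => @report }  # the same resource, machine-readable
        end
      end
    end

The point of the demo is largely that respond_to bit: one resource, one URI, multiple representations.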

Of course the intent of the demo is not to show off Rails (I don’t plan to) or to prove how leet I am (I’m not), but to put an emphasis on the fact that something as mundane as a Web application, when properly designed, leads to accessible information and evolvable systems. I’d love to demonstrate scalability and performance too, but, hey, I’ll only have one laptop with me. (Oh, and VMware rocks.)

Hope to see you there.

The Human Condition

So the battle still rages over on SOA Discuss. However, the list server seems to have crapped out. So, while I haven’t been an active participant by any means, I’m pulling this from my sent mail folder and posting it here.

Anne Thomas Manes wrote:

The problem is caused by the root culture of IT — project-driven funding models, a cobbler’s kids perspective on investing in infrastructure that helps IT (rather than a particular project), and a propensity to never decommission applications. IT systems have grown organically for the last 40 years. They’re a mess. It requires a fundamental change in the way IT operates as a service provider within the organization.

Now Anne’s my new hero (of course, we work together). This hits the nail on the head. Business is culpable. IT is culpable. I’m culpable. So are you. I would refine Anne’s point by saying that the ailments we’re discussing are not entirely cultural. I would add two other factors.

1. A sensible desire on the business’s part to maximize its investment. To not fix what isn’t truly broken. If a business in 1990 spent $2M and 2 years building a widget management system, and that system was perfectly designed and executed, then there’s an understandable reluctance to build it again. Similarly, from the IT perspective, if there’s an existing system that used to meet all of your needs but now only meets 75% of them, it seems sensible to extend the system, rather than rebuild it. It’s the same thinking that has me maintaining a house that’s over 100 years old, even though it leaks heat like a sieve and has antiquated wiring on the second floor. (You’re not allowed to beat up on that analogy.) This issue is deeply impacted by the incredible rate of change in technology, and all the things no one saw coming: PCs, universal networking, the Web, open source, etc.

2. People and groups of people are independent actors. They have their own biases, knowledge bases, desires, needs, and motivations. Any time an overarching strategy tries to unify all the disparate players, it runs up against this independence, which slows the strategy down and ultimately brings it to a stop.

I suspect that there’s no way to make all these things go away, and if we want to drive better business through technology, our planning has to account for these three factors (Anne’s cultural issues plus my two). It’s likely that the cultural issues can actually be fixed. We can change funding models and processes. We can even effect a change in mindset. However, the other two seem to be intrinsic to the human condition, and I think our planning is simply going to have to incorporate that. Thus, we will need to discover processes and technologies that allow systems to be built at minimal cost (time and dollars) and that can, in effect, be thrown away. And we need to allow IT and the business to act as independently as possible from some central governing authority. A very delicate balance in both cases.

From a purely technical POV (and recognizing my own biases), it seems to me that we can partially address the cheap-fast-and-gone issue by moderating complexity. This might entail things such as promoting dynamic languages; building smaller, minimally functional components (using your favorite technology, but erring towards the simplest); hiding the brittle things behind facades; making strategic bets on very few technologies or technology patterns; and so on. Regarding the people-are-people thing, I think this means that we cannot dictate many universal behaviors. We can only strongly encourage (preferably by example) that players do what they can to minimize friction between themselves and others (technology-wise, that is) via the use of standards and of system designs that allow actors to evolve independently.

What that doesn’t address is how we get IT to do what the business wants. Personally, I think IT does a pretty good job of that already. What seems to be the issue is that IT is building what business units want today. Both IT and the business units are not planning for tomorrow, nor are they thinking about the rest of the business. And this is what competent CIOs and their minions, business analysts such as Steve [Jones] and Rob [Eamon], industry analysts such as Anne [Thomas Manes] and Nick [Gall], and consultants such as myself need to deliver on. That is, given the constraints above: project-driven cultures within IT and without, rapid technological and business change in the face of sunk costs, and the fact that the enterprise (indeed the world) is an anarchic place, how do we get people to build systems that meet the needs of today and tomorrow? You can call this enterprise architecture if you want.

Towards a better network programming taxonomy

Previously I defined and redefined some terms to be used when discussing networked systems. With feedback from Sam Ruby, Josh Haberman (in the comments), and Steve Jones, plus some more thought on my part, I would like to present a more refined version of the same. With this post I’d like to introduce some new terms and revisit some old ones, all arranged in a taxonomical structure from most to least general. Note: the definitions given here will differ in places from those given earlier and from some definitions in general use.

Why bother? Principally because the terms in current use are heavily overloaded, especially SOA, which has at least three distinct definitions depending on whom you ask. It’s my hope that some agreed-upon terminology will help keep people from talking past each other and better crystallize our thinking on the subject.

This taxonomy only addresses network programming models that are to be used for general-purpose systems. It does not care about anything below layer 7 of the OSI network model, and it does not care about dedicated network protocols such as LDAP or SQL*Net.

At the top of the stack there is network-centric computing (NCC). NCC is the belief, generally held by all interested parties (developers, architects, business analysts, and the business itself), that exposing functionality directly on the network in a standard and interoperable manner is superior to building application silos. NCC is completely technology independent; it is an approach to systems design, a mind set. Why not just use the existing term distributed computing? Mainly because that term includes extremely tightly bound systems, including distributed systems that are in fact a single system that happens to reside on multiple machines, such as Oracle RAC, or systems that parallelize processing, such as SETI@home.

Next we have NCC designs that take into account the vagaries of the network. These are termed network-oriented computing (NOC) designs. NOC systems have incorporated into their design the fact that networks are not reliable, do not have zero latency, and so on. In contrast, an NCC design that attempts to abstract the network away exhibits a network-independent computing (NIC) design. Such a design attempts to ease the burden on the developer by modelling remote systems as if they were local.
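A small Ruby sketch of the contrast, with invented URIs and class names: the NOC-style function confronts timeouts and connection failures in its own code, while the NIC-style proxy hides the network behind what looks like an ordinary local call.

    require 'net/http'
    require 'uri'

    # NOC-style: latency and failure are part of the design.
    def fetch_widget(id)
      uri = URI.parse("http://widgets.example.com/widgets/#{id}")
      http = Net::HTTP.new(uri.host, uri.port)
      http.open_timeout = 2                 # connections may never open
      http.read_timeout = 5                 # responses may never arrive
      response = http.get(uri.path)
      response.is_a?(Net::HTTPSuccess) ? response.body : nil
    rescue Timeout::Error, IOError, SystemCallError, SocketError => e
      warn "widget service unreachable: #{e.message}"
      nil
    end

    # NIC-style: a proxy that models the remote system as if it were local.
    # The caller writes WidgetProxy.new.find(42) and never sees the network,
    # which is convenient right up until the network misbehaves.
    class WidgetProxy
      def find(id)
        fetch_widget(id)
      end
    end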

Either model can be further refined by applying an architectural style. “An architectural style is a coordinated set of architectural constraints that has been given a name for ease of reference.” [1] In other words, by restricting the universe of possible design approaches, an architectural style becomes a way of talking about architectural designs. An architectural design, then, is the blueprint of a running system. And an architecture is the system itself. Typically, as I do here, the latter two terms are conflated so that an architecture is an architectural design. By loose analogy, Art Deco is an architectural style, the plans for the Chrysler Building are an architectural design, and the Chrysler Building itself is an architecture.

The most commonly applied architectural style used in NOC systems is Representational State Transfer (REST). REST is a style that has the resource as its key abstraction, where a resource is anything important enough to name. A resource has state that changes over time, and clients interact with the resource via a uniform interface. Critically, resources also point to other related or interesting resources. REST as an architectural style is one step removed from an architecture. One possible implementation of the REST style is the resource-oriented architecture (ROA). ROA is the set of best practices encouraging the principled use of HTTP to create RESTful systems.
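As a rough illustration of the uniform interface, and of resources pointing to other resources, here’s a Ruby sketch; the URIs and the XML representation are made up.

    require 'net/http'
    require 'uri'
    require 'rexml/document'

    order_uri = URI.parse('http://example.com/orders/1234')

    # GET: retrieve the current state of the resource
    response = Net::HTTP.get_response(order_uri)
    doc = REXML::Document.new(response.body)

    # The representation links to related resources (hypermedia)
    link = doc.elements['//link[@rel="customer"]']
    customer = Net::HTTP.get_response(URI.parse(link.attributes['href']))

    # PUT: replace the resource's state with a new representation
    Net::HTTP.start(order_uri.host, order_uri.port) do |http|
      http.send_request('PUT', order_uri.path,
                        '<order><status>shipped</status></order>',
                        'Content-Type' => 'application/xml')
    end

Every resource gets the same handful of verbs; what varies is the resource, not the interface.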

As regards NIC designs (e.g. CORBA, DCOM, and WS-*), there is no identified architectural style. Such architectures have evolved over time to address the needs of their creators and users, and later systems are typically modeled after earlier systems. Even so, it is possible to infer certain fine-grained styles inherent in these architectures, such as distributed objects or client-server. One thing such systems have in common is the promotion of the interface as the key abstraction, where the interface describes a collection of non-uniform operations/methods exposed by a service. Collectively, we can refer to these systems as having a service-oriented architecture (SOA).
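By way of contrast, here’s what the interface-centric view looks like as a Ruby sketch; the service and operation names are invented. A client binds to this one interface and its bespoke verbs rather than to a uniform set of methods applied to many resources.

    # The key abstraction is a named interface exposing operations specific
    # to this one service (think of it as the shape a WSDL would describe).
    class ExpenseService
      def submit_expense_report(report);         end
      def approve_expense_report(report_id);     end
      def get_expense_report(report_id);         end
      def list_expense_reports_for(employee_id); end
    end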

In my earlier post I claimed that, while governance and business process analysis are self-evidently important, SOA (and ROA) did not exhibit sufficient uniqueness from traditional application design and development to warrant specialized versions of either. I’m backing off that stance. After all, if devolving a network-accessible application into its constituent components can justify not one but six distinct concepts, then certainly the same is true for governance and business analysis. In addition, NCC is indeed different enough that it requires a level and style of governance that needs to be clearly spelled out and executed. Ditto business analysis. I firmly reject, however, the idea that governance and/or analysis equal architecture and that the technology is not important.

As pertains to governance, ROA and SOA are different enough from each other that the policies and processes that govern them should also be distinct. Hence we’ll give them the simple terms ROA governance and SOA governance. But while the governance processes will be different, the governance needs will often be the same. For instance, either architecture could benefit from a directory: in today’s SOA environments that typically means UDDI, while in a ROA environment it is likely a Web page or simply the result of a search. (Or vice versa.) Therefore, the common governance abstractions (directories, repositories, documentation, etc.) are given the term network-centric governance.

Regarding business process analysis, I don’t think that ROA and SOA warrant specialized techniques (but I’m willing to be convinced otherwise), thus we have simply networked business process analysis.

Summarizing the above taxonomically gives us:

– Network-centric computing (NCC)
—– Network-oriented computing (NOC)
——– REST
———– ROA
—– Network-independent computing (NIC)
——– SOA
– Network-centric governance
—– SOA governance
—– ROA governance
– Networked business process analysis

(I would have used nested unordered lists, but WordPress keeps horking that up.)

To use these terms in discussion we might have the following: An enterprise that recognizes that application silos are to be avoided, and that exposes functionality directly on the network for general consumption in a standardized and interoperable manner, is undertaking a network-centric computing initiative (an NCC initiative). They may choose to acknowledge that the network imposes certain architectural constraints on this initiative and thus choose a network-oriented computing approach. This would be especially likely whenever crossing network boundaries, but is considered by some to be a good approach to most NCC designs. Alternately, they may choose to abstract the network away by using a network-independent computing approach. This is especially likely when the clients and services are on a single, well-managed local area network.

What is SOA?

Once again the participants in the SOA discussion group have got themselves all riled up about what exactly SOA is and why it may or may not be working. Here’s my two cents. We’ll start with another history lesson. (Skip it and get to the meaty bits.)

Once upon a time, the employees of Example Corp. complained to the Finance department that they were tired of using spreadsheets to file expense reports. So Finance went to IT and said, “Build me an expense report management system — and I want it by the end of the year.” And IT went among the people in HR and Finance and the company at large and gathered requirements and analyzed business processes. After which they returned to Finance and said, “The end of the year’s out. How ’bout June?”

“Just get it done,” said Finance.

In December of the following year IT delivered a system that more or less met most of the requirements they had gathered. “Fine. Sure. Whatever,” everyone said.

Then one day, Finance came back to IT and said, “Hey, I need to pull all expense data into our new ERP system.”

“No problem,” said IT, “We’ll code up a batch process that produces a monthly report and then we’ll insert that information right into the ERP system’s database.”

“Erm, okay,” said Finance, and went away bewildered. But some months later Finance returned to IT and said, “We’ve upgraded the ERP system, and now that expense thingy isn’t working anymore.” As it happens, IT had just been visited by Consulting, who had said, “My consultants are tired of entering expense data twice. Fix it.”

IT was sullen. How were they ever going to integrate all these systems? And shouldn’t they prepare ahead of time for the pending acquisition of Sample Systems?

“We could publish our database schema,” said one IT member.

“Won’t work. People will enter things wrong,” said the lead developer.

“Stored procedures?”

“Maybe, but that means deploying database client libraries everywhere.”

“We could create an API that we distribute to everyone who asks,” suggested another IT member.

“Too many languages, frameworks, and operating systems,” replied the lead developer.

“Didn’t someone just buy an EAI engine? Couldn’t we use that?” asked a third developer. Everyone laughed.

“What about CORBA?” queried the new guy.

“What about what?”

“CORBA. Look, see. Interoperable remote procedure calls. Keep the business logic over here, publish the interface, let the client worry about the rest. I’ve already built a prototype.”

The IT people were impressed. They showed the prototype to Finance and Consulting. The prototype was quickly moved into production. Flush with success, the lead developer said, “You know what, we should make everyone do this. From now on no more application silos. Everyone must make functionality available over the network.” But few listened. The Microsoft guys said that it sounded right in principle, but we should use DCOM. The mainframe guys went on building CICS apps. And, frankly, it went right over the heads of the PowerBuilder and ColdFusion guys. Worse, CORBA wasn’t quite as easy to use as it seemed. It was complex, nothing interoperated, it didn’t work through firewalls, and a bewildering number of specs were coming out of the OMG.

“Well, how about this new SOAP thing,” said the new new guy. “It’s simple, it works through firewalls, all the vendors are on board, and, look, there’s only three specs, and we don’t need this UDDI one.” There was much rejoicing. “This just has to work,” said the lead developer (now promoted to enterprise architect). “I can’t prove it, but I can feel it in my bones. We just need to give this idea a name.”

“I’ve heard it called service-oriented architecture,” said the new new guy.

“Service-oriented architecture, ay? I like it. Nice acronym too. SOA. Let’s pronounce it soh-uh.”

Soon, the word spread throughout the land: No more silos. Make functionality available on the network using standardized, interoperable protocols. We’re service-oriented now. And, believe it or not, people got it. Now that they could see that it just might work, it seemed like a painfully obvious idea. Building silos is bad. Exposing functionality on the network is good.

And… Well, you know the rest. They picked the wrong technology again, despite the fact that the right choice was staring them in the face. Like CORBA before it, SOAP wasn’t quite as easy to use as it seemed, interoperability was problematic, and a bewildering number of specs were coming out of the W3C, OASIS, and the vendors themselves. Not only that, there was a big pile-on of suspect products, pie-in-the-sky promises, and ever-changing architectures. But, really, all that’s beside the point. The point is that more people than not understood the value of deploying standards-based, network-accessible business logic.

So, then, what is SOA? For one thing, SOA is misnamed. It’s not an architecture in any sense of the word. It is, to use a Burton Group phrase, a mind set. It is the generally held belief that when implementing systems one should expose system functionality for general consumption directly from the network, as well as or instead of burying it behind a user interface. It is, as well, the belief that there is a great deal of value to be generated by retrofitting network accessibility into most existing systems. And it is the belief that this can only work if the means of doing so aren’t locked to a particular language, framework, operating system, vendor, or network architecture.

Another problem with the SOA name is the “service” bit. At least for me, the term “service” connotes a collection of non-uniform operations. I don’t even like the phrase “REST Web services.” Certainly, SOAP/WS-*, CORBA, DCOM, etc. fit this definition. But REST? Not so much. In REST the key abstraction is the resource, not the service interface. Therefore SOA (and I know this is not anyone’s strict definition) encompasses the above mind set, includes SOAP and similar technologies, and excludes REST.

A better name for SOA, then, might be network-oriented computing (NOC). This encompasses both WS-* and REST (and most everything else from the socket level up). We can, if we want, make SOA and resource-oriented architecture (ROA) a subset of NOC. In which case the “architecture” bit makes sense again.

“But wait,” I hear my SOA-loving readers say. “SOA is not about exposing business logic on the network. That’s just a technology thing. SOA is about the business! CxOs and business units don’t care about technology, they will only pay for business solutions.” Which always makes me scratch my head. What exactly does IT ever do that’s not about the business? Do they not work for the same company as the other BUs? Is a firewall about the business? Of course it is; there’s a business requirement to maintain information security. Is a router about the business? Obviously, the business is demanding networked communications. Is an application server about the business? Yep, having someone else write all the plumbing gets systems out faster, and that’s a business requirement. How about Agile? That too is a business requirement, faster, better software. Testing? Yes. VoIP? Yes. SOA? Yes.

“No, no,” the SOA-is-business advocates reply. “It’s not just ‘business’ it’s better alignment with business. You see, if we can identify a business process such as ‘open new account,’ then we can create a service called OpenNewAccount. You see how those things line up there.” All well and good, say I, but as Stu and Steve say, “Things change. And besides, what if you’re in perfect alignment with the business, but the system doesn’t work? What if it doesn’t scale? What if clients can’t use it? What if the programmers can’t code it?”

“Well, it’s not just about business alignment,” another group of SOA advocates claim. “It’s really about governance.” Again, I’m scratching my head here. Everything is about governance! Software development (network-oriented or not) is about governance: ‘You must use a version control system; you must write unit tests.’ Moving systems into production is about governance. Updating a routing table is about governance. Hiring a new employee is about governance. Buying a plane ticket is about governance.

“No, no.” They go on. “With SOA there’s new things to govern.” That’s true, there are. But really, is it that much different than any other governance process? Not really.

(By the way, I’m also on record as saying that REST requires less governance than WS-*, while others might say the governance needs are the same. I stand by what I said, noting that if there’s any chance in hell of getting all this to work, it’s going to require a truckload of governance.)

So, that’s it. As Stefan also points out, SOA today has a number of fluid definitions: it’s the notion of tearing down silos and making functionality available on the network (frequently with the WS-* stack implied), it’s the use of governance to ensure that people do this right, and it’s the alignment of business with IT. If any of these can be considered more right than the others (by usage or historical precedent), then I would have to say it’s the first.

No matter which definition works for you, though, SOA is misnamed. So I’ll leave you with some updates to your lexicon:

  • Network Oriented Computing (NOC): An approach to computing that makes business logic available over the network in a standardized and interoperable manner.
    • Service Oriented Architecture (SOA): A technical approach to NOC that has a non-uniform service interface as its principal abstraction. Today, SOAP/WS-* is the chief implementation approach.
    • Resource Oriented Architecture (ROA): A technical approach to NOC that has an addressable, stateful resource as its principal abstraction. Today, REST/HTTP is the chief implementation approach.
  • Business Service Architecture (BSA): An unnecessary term (also not an architecture) that tries to make the obvious something special. Aka, business analysis. Aka, requirements gathering.

Slides from Roy’s presentation

Over on REST-Discuss, Roy Fielding dropped an innocuous message containing a link to the slides he’s delivering (has delivered) today at RailsConf Europe.  I’m pleased to see that much of the material lines up nicely with my own REST Easy workshop.

WordPress and Atompub Revisited

Tim beat me to the post, but as of around midnight last night we have no known issues with Atompub support in WordPress 2.3. Woot!

My dear departed father used to give me this piece of advice (left over from his Navy days): “never volunteer.” But I did, and I’m happier for it. This is the first time I’ve contributed to an open source project, and you know what, it’s a lot of fun—and rewarding. And working with Sam, Tim, Elias, and Joseph was a distinct honor and pleasure.

That said, Tim gives me too much credit. The bulk of the code is Elias’s. All I did was patch some bugs and bring the code up to compliance with the latest spec. Sam nailed some gnarly ones too, and automated everything—a habit I have to get into. It took a little longer than expected because I had to relearn PHP, which I haven’t used since 2001, and go spelunking through the WordPress codebase.

Currently, we expose posts and uploads (media entries). Once 2.3 is released I hope to add support in 2.4 for WordPress pages and comments among other things. I might also be convinced to back-port this to WP 2.2 if anyone’s interested.
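If you want to poke at it from a script, something like the following Ruby sketch should create a post. The blog URL, the endpoint path (wp-app.php/posts), and the credentials are placeholders; check your own install’s Atompub service document for the real collection URIs.

    require 'net/http'
    require 'uri'

    entry = <<-ATOM
      <entry xmlns="http://www.w3.org/2005/Atom">
        <title>Hello from Atompub</title>
        <content type="text">Posted with a plain HTTP client.</content>
      </entry>
    ATOM

    uri = URI.parse('http://blog.example.com/wp-app.php/posts')
    Net::HTTP.start(uri.host, uri.port) do |http|
      request = Net::HTTP::Post.new(uri.path)
      request.basic_auth('username', 'password')
      request['Content-Type'] = 'application/atom+xml;type=entry'
      request.body = entry
      response = http.request(request)
      puts response['Location']   # URI of the newly created post
    end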

REST Workshop Slides Available

You may or may not know about Burton Group’s Catalyst Conference, which is held twice a year: once in North America and once in Europe. At the North American conference we offer in-depth, 4-8 hour workshops on a variety of subjects. This year, I created and delivered a four-hour workshop on REST, though it ran for 4.5 hours and could easily have been six. The workshop gave a concise overview of REST, a lengthy tutorial on how to build RESTful systems, and a brief comparison of REST with WS-*. It even included working code to illustrate most of the concepts.

Well, after just a little bit of cajoling on my part, Burton Group is making the REST Workshop slides available—for free—to everyone. Yes, they’re behind a registration wall. And, yes, you’re likely to be contacted by us if you download them. But, if I do say so myself, it’s a really good deck. It was even rated the best workshop at Catalyst by attendees. Now, paging through some slides is not going to make you all knowledgeable, but if you haven’t got your mind fully around REST, maybe they’ll help. More important to the readers of this blog is that you now have a resource that you can repurpose to help sell and explain REST to your colleagues and managers.

As mentioned, the workshop as delivered also showcased some running code. I’ve left the code out of this offering for several reasons. Mainly code quality. I wrote the services in Java using Restlet 1.0 (and the clients in Ruby). But Restlet’s a moving target, and is now at 1.4. Furthermore, I wrote it fast, while learning Restlet at the same time. And I ignored everything except what was being demonstrated, like error handling and comments. In other words, while it all works, it’s ugly and out of date. BTW, this explains why a number of slides are given over to Restlet architecture.
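To give a flavor of the client side, here’s a hedged sketch of the sort of throwaway Ruby client the workshop used against the Restlet services. It is not the actual workshop code; the host, port, and path are made up.

    require 'net/http'
    require 'uri'

    uri = URI.parse('http://localhost:8182/bookmarks/42')

    Net::HTTP.start(uri.host, uri.port) do |http|
      request = Net::HTTP::Get.new(uri.path)
      request['Accept'] = 'application/json'   # ask for a particular representation
      response = http.request(request)

      case response
      when Net::HTTPSuccess  then puts response.body
      when Net::HTTPNotFound then puts 'no such bookmark'
      else                        puts "unexpected response: #{response.code}"
      end
    end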

Aside: Restlet rocks. The Restlet mailing list rocks. Jerome Louvel rocks. The documentation, well, it doesn’t rock. Note to Tim O’Reilly: Please give Jerome enough money to live on for 6-12 months and have him write a Restlet book.

Burton Group also asked me to create a “Take 5” on the subject. A Take 5 is an audio-enhanced, five-minute PowerPoint show on a pertinent topic. The REST Take 5 is really 15 minutes, but gives a good 40,000-foot, managerial overview of REST. (That link goes straight to the 10MB PPT deck, unless you need to register or login. You’ll need to view the deck in slide show mode to hear the voice-over.) Give it a listen over your coffee this morning, email the link to your boss and the EA team, speak truth to power and all that.

Anyway, I hope you go through the trouble of downloading this material. Largely, it was you that created it (you might even find yourself quoted in there); I just compiled it. Of course, any errors are mine, so let me know if you find any and I’ll fix them. And let me know if you’d like to have me present this to your organization (it’s much better with a narrator to provide the details and answer questions).

On WordPress and Atompub

In my earlier post about appfs I noted that WordPress wasn’t up to snuff as far as supporting Atompub. Offline, Tim Bray, Elias Torres, and I had a brief discussion. It seems that many of the bugs that I’ve fixed, and many more besides, have already been fixed by Elias. The patch submitted to WordPress is here: http://trac.wordpress.org/ticket/4191. Alas, while this patch is tagged for inclusion in 2.3, there doesn’t seem to be much activity around it.

I had indicated that I would fix WordPress Atompub bugs as I find them, and keep a working version over here. But given everything Elias has already done (and the WordPress team has not), I’m no longer sure that’s a good idea. It looks like we WordPress users are simply out in the cold for a while when it comes to proper Atompub support.

I will continue to maintain and enhance appfs, but I think I’ll start testing against Abdera.