This post is for vendors, developers and implementers of the Experience API, and for anyone who may one day create something with xAPI. The choices made soon will ultimately impact what you will be able to do with xAPI three, five and twenty years from now.

I’m going to lay out options for how we can approach standardization and the reasoning behind them. I’m going to ask you as vendors, entrepreneurs, developers, designers from all around the world to provide me with your counsel as I ultimately must direct the effort. I ask you to support your interests with your participation as we go forward.

This is a long read. There’s nuance that’s not easily summarized. Please make the time to read, comment below, post responses that link to this statement.

In August, 2014 I participated in one of the IEEE Learning Technology Standards Committee (LTSC) monthly meetings. This particular one was special, as we were formally reviewing the Project Authorization Request (PAR) for the Experience API to become a formal standards project. Once the request is made and approved by IEEE’s New Standards Committee, we begin the last leg of a journey that started with friends riding roller coasters in Cedar Point amusement park in Sandusky, OH back in 2009.

The PAR wasn’t approved by the LTSC in that August meeting. It wasn’t the slam dunk I was naively hoping it would be. There were questions raised by the committee that may have easy responses, but the easy responses I can share aren’t necessarily the better responses we need.

This has me reflecting deeply about what kind of future with xAPI we need to enable. So here goes.

We (as citizens of the world) generally need better responses to the tiny events that color the big picture. Over the last couple of months, looking at recent events in the US and around the world, looking at our own work with and in organizations dealing with a stagnant economy, looking at ourselves… looking at myself… it’s so desirable to do the easiest or simplest thing in any given scenario. It’s impossible to figure out the best thing to do, because the future is filled with rabbit holes and we can never go down all of them.

We must be mindful of our options and deliberate about our choices.

When we’re talking about xAPI, we must appreciate that there are already millions of dollars invested (in the value of people’s contributed thoughts, their time and actual capital) in development and adoption. However, we have to also be mindful of the billions of dollars to be invested in xAPI going forward.

If SCORM taught us anything, it was these two things:

  • First, it taught us how to make real money in learning technology by formalizing how we commonly approach enterprise software;
  • Second, it taught us how costly it is to not be mindful or deliberate about our choices, technical and political, at the specification and standards level.

I can feel some of you bristling already about the focus I have on the financial perspective. My perspective is this, and I can’t say it strongly enough: there’s no way we can make real change in learning, education and training without it being financially viable. Money is what makes things happen. I feel a responsibility to make sure xAPI is designed well enough to encourage the investments that make the promises of the spec actionable.

As an industry, we’ve gotten this far, this fast, with xAPI, and must continue to do so, precisely because people can find ways to profit from their investments in sweat, time and capital AND make the world easier to learn from. I want to make it as easy as possible for people to innovate and solve real-world problems with this specification. I want to encourage it by keeping it as open as we can AND by making it possible for the best approaches, not just the best spin, to find adoption.

We’re on the verge of something. We can take this open specification and transform it into an international standard that will catalyze data interoperability across systems. Done well, this enables people to “own” their data, promoting citizenship and personal autonomy in a world that’s more and more digital. Or… we just take this open specification as it is, and try and keep the scope to simply transposing it for standardization, ensuring that adoption years from now will look pretty similar to what it looks like today… which looks exactly like SCORM.

As the leader of this standards effort, I want to hear what you have to say. I want to consider diverse opinions and insights that don’t come from within my echo chamber. In the end, I will ultimately make the decision about the scope of the standards project.

These are the rabbit holes and trying to go down them all, repeatedly, is exhausting.

Consider Breaking Up the Spec Into Separate Standards Efforts

Some in the LTSC are very familiar with the European Union’s policies on privacy, security, data ownership and the rights of individuals in digital spaces. In response to their concerns about “tracking,” which rightly furrows brows and adds wrinkles prematurely to us all, a suggestion that gained momentum was that we consider breaking up xAPI into three separate standards efforts — three different documents to be linked together. Doing so would make it possible to isolate the areas of the existing spec that cause concern. This approach has some advantages that I’ll expand on.

Think about this like we think about WiFi. “WiFi” is essentially a set of IEEE standards — its project number is 802.11. There are different forms of WiFi that co-exist — 802.11a, 802.11b, 802.11g, 802.11n… Each does a slightly different thing, but altogether any/all of these are “WiFi.” This is the frame to consider for the Experience API. “xAPI” will have its own project number (1484.xx) and it would look like this:

1484.xx.a – A standard for the Data Model would describe how xAPI statements are formatted. This would remove the need to use the Statement API, or to have a Learning Record Store, in order to store data in statement format. Since the data model can be applied generally, there are lots of ways statements can be used, which would encourage more adoption by lowering the barrier to entry, which (in turn) could bring a lot more activity providers on board. You may ask, “Why would someone only want to use the data model?”

Real use-case: One current “adopter” of xAPI is only using the data model, without an LRS. I put adopter in quotes because, according to the spec, without the LRS, he’s not conformant. Anyway, in his implementation he’s using the JSON binding for Activity Statements to track what people are doing in his software, in the context of how people use the software to accomplish specific tasks. He’s storing the statements in his own database and has no reason to share them with another system. He’s not taking in statements other than those he’s designed. He is simply using the data model to track activity in a consistent way in case one day he does need to share them, but right now there’s no reason to incur the cost of an LRS or use the Statement API.
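
To make that concrete, here’s a minimal sketch of the data-model-only pattern in TypeScript. The actor/verb/object shape follows the spec’s statement data model; the function name and the example.com IRIs are made up for illustration:

```typescript
// A minimal sketch of the "data model only" pattern described above.
// The statement shape follows the xAPI 1.0 data model (actor/verb/object);
// logActivity and the example.com IRIs are hypothetical.

interface Statement {
  actor: { objectType: "Agent"; mbox: string; name?: string };
  verb: { id: string; display: { [lang: string]: string } };
  object: { objectType: "Activity"; id: string };
  timestamp?: string;
}

// Record what someone did, in statement format, straight into the
// application's own database: no LRS, no Statement API.
function logActivity(userEmail: string, verb: string, taskId: string): Statement {
  return {
    actor: { objectType: "Agent", mbox: `mailto:${userEmail}` },
    verb: { id: `http://example.com/verbs/${verb}`, display: { "en-US": verb } },
    object: { objectType: "Activity", id: `http://example.com/tasks/${taskId}` },
    timestamp: new Date().toISOString(),
  };
}

const stmt = logActivity("ada@example.com", "completed", "export-report");
console.log(JSON.stringify(stmt)); // persist however you like
```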

1484.xx.b – A standard for the Statement API would then act as the means to validate statements made, whether in an LRS or not. As it is now, an LRS is really useful in concept for data transfer, but most adoption currently isn’t around sharing data across LRSs, and if you’re into doing “big data” (or, more aptly, “messy data”) mashups, an LRS only keeps xAPI statements. What this would allow is the means by which any database or web application could let in or keep out statements that are junk, and use xAPI statements as a system might use any other data source. You may be asking, “Why would someone only want to use the Statement API?”

Real use-case: Some of the largest educational publishers are implementing the Statement API and data model into their existing internal data storage to validate xAPI-formatted Activity Statements before accepting them into their data warehouse, along with all sorts of other data they’re tracking. They have no intention of sharing this data with any other system, and they don’t want the segregation of xAPI Statements from the other data they’re collecting. Rather, they want the xAPI data co-mingled with these other data to get a fuller analysis of how people are using their materials.
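
Here’s a hedged sketch of what that gatekeeping might look like, reusing the Statement interface from the sketch above. The checks shown are illustrative; the spec’s actual statement rules are more extensive:

```typescript
// Illustrative validation gate, reusing the Statement interface above.
// These checks are a simplification of the spec's statement requirements.

function isValidStatement(input: unknown): input is Statement {
  const s = input as Statement;
  return (
    typeof s === "object" && s !== null &&
    // the actor must be identifiable (mbox is one of the spec's identifiers)
    typeof s.actor?.mbox === "string" && s.actor.mbox.startsWith("mailto:") &&
    // the verb and the object must each carry an IRI id
    typeof s.verb?.id === "string" &&
    typeof s.object?.id === "string"
  );
}

// Junk stays out; valid statements get co-mingled with everything else.
function ingest(batch: unknown[], warehouse: Statement[]): void {
  for (const candidate of batch) {
    if (isValidStatement(candidate)) warehouse.push(candidate);
  }
}
```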

1484.xx.c – A standard for the Learning Record Store would focus on the portability of data among systems: the authentication and the interfaces needed to connect various functions with other systems. Creating an LRS is the most difficult and complex part of xAPI, and its uses are scoped only to activity statements that are valid xAPI statements. Anyone who’s built an LRS loathes the complexity of the work involved in figuring out the privacy, security, data ownership, transport and exchange mechanisms that we’ve put off because they were too complex… but if we want real international adoption of xAPI, this will need to be addressed for the European Union. Or it won’t… and the failsafe is that the above two specifications can garner international adoption without a lot of pushback, and LRSs as they are can exist where they can.

Currently, adoption of xAPI is very LRS-centric. I personally believe that the LRS is not the most valuable part of xAPI. I enthusiastically embrace LRSs as a product category, but it’s important to remember that LRSs-as-discrete-applications was never the intent. Rather, an LRS describes a scoped set of functionality that could be part of any app, any software, anything that reads data generated by another app or piece of software. The LRS currently is the most marketable concept people understand because we all can relate an LRS to our expectations of what a learning management system does. The key to the long-term value of standardization comes not from a spec that revolves around an LRS, but from a spec that is focused on the data itself and the myriad ways it can be exchanged. As my friend Steve Flowers put it, think about LRSs as antennae, not fortresses.

You are likely asking, “Why would anyone want to use the LRS without the Statement API or the Data Model?”

Real use-case: Companies (plural) tried to build Personal Data Lockers. They wanted to make it possible to share a learner’s activity data across systems — not just keeping it inside one LRS. Rather, the intent was to have the data follow learners across systems. Rather than think of an LRS as a fortress that holds all the data, these companies were trying to follow the original vision of the LRS as antennae that send and receive data that follows the learner wherever they go. These implementations weren’t fully conformant to the spec, because sharing data according to the spec as it is… well, it is really hard. Ironically, in the two cases I’m thinking about, both companies turned their attempts at Personal Data Lockers into full LRS products.
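
For a sense of the antenna idea in code, here’s roughly what sending a statement onward over the Statement API looks like, again reusing the Statement interface from above. The /statements resource and the X-Experience-API-Version header come from the spec; the base URL and credentials are placeholders:

```typescript
// Sketch of an "antenna": forwarding a statement to another system that
// speaks the Statement API. lrsBase and authToken are placeholders.

async function forwardStatement(stmt: Statement, lrsBase: string, authToken: string): Promise<void> {
  const res = await fetch(`${lrsBase}/statements`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.1", // version header required by the spec
      "Authorization": `Basic ${authToken}`, // Basic auth is one common option
    },
    body: JSON.stringify(stmt),
  });
  if (!res.ok) throw new Error(`Statement rejected: ${res.status}`);
}

// The same statement could be forwarded to a personal data locker, a
// corporate LRS, or anything else that implements the API.
```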

“xAPI” would then be the term that describes the general set of standards and what they enable, but each individual standard deals with something distinct, supporting the greater whole.

The hope in this approach rests on the high level of amity the xAPI specification community more or less held while the spec was being developed. The shine may have dulled a bit in the last year and a half as competing vendors polish their own chrome (and some are admittedly better at it than others), but this approach may well forge new opportunities for cooperation and competition, as well as sweeten the honeypot of adoption. We need to make xAPI friendly for more adopters — starting with those who chose not to use a standard and built something proprietary because they couldn’t adopt only the part of our spec they needed. If we can really spur more interest and adoption, and widen the possible ways in which people can adopt, every vendor participating stands to gain in a larger market. By making it easier for people to adopt specific parts of the would-be standard, we enable use cases on the fringe of our imaginations that may emerge as the strongest and most valuable use cases. By making the LRS its own standard, the things that were really difficult to address at the spec level — like how data is shared across different LRSs — would be given their due attention. By making the Data Model and the Statement API their own standards, we enable adoption for use-cases where lower barriers to entry are needed. By making the Data Model its own standard, we encourage more Activity Providers. Given how LRS-heavy adoption of xAPI is now, we need this to grow.

The risk in this approach is that it will be a freaking difficult path. It will likely break current implementations. To be honest, I don’t have a pony (read: product) in this race and breaking changes in any approach to standardization are inevitable. I don’t personally stress about that part. I worry much more about how the LTSC can manage three concurrent standards projects that must work together. It requires a lot of attention and participation, and the kind of cooperation and amity among competing interests that sometimes fails standards groups. It will take longer to create the standards this way, though some things — specifically the data model — may be able to standardize sooner.

Consider Keeping the Spec “As-Is” In One Standard

All of the above stated, it is admittedly tempting to try and keep the spec as it is, even if it narrows the spread of adoption.

One reason is that there are over 60 vendors adopting it with no interoperability issues with the data — but there are few who have tried to share data across different LRSs. Still, that’s a pretty damn significant reason on its own merit. Over a year past its release as Version 1.0, there are as many (or more) open source LRS options as there are commercial options. As I said before, the LRS isn’t really where the magic is at with xAPI, but given the framing of the specification the shape of the conversation around xAPI is the attempt to answer the question “What can an LRS do for your organization?”

To be honest, that’s a question that at least has a more immediately tangible response — an easy response. I don’t love that question, but as a pragmatist and someone who wants to see things get done and catalyze economic growth not just to make existing vendors more wealthy, but to encourage new players to compete on a level playing field so that the best products find adoption… (breathe) that’s a framing that’s focused and easy to design and develop for.

If one considers that xAPI was designed to solve a fixed set of issues in response to recent (think the last five years or so) challenges with eLearning (particularly with how we approach communication with an LMS outside of that environment), then, while incomplete on its own for eLearning, xAPI is an amazing success story. That we can use web services and describe a consistent (albeit imperfect) approach to handling offline activity and syncing localized activity data back to an LRS… this is a huge advancement beyond what we’ve done with SCORM — even as we acknowledge that it doesn’t replace SCORM. People still need content interoperability. xAPI is about data interoperability. They are not the same thing, and modeling our approach to data on how we approach content is tempting, but misleading.

The hope in keeping the spec together in one document is that, well… it’d be easy, right? It’s an existing spec. It works. People use it. One can argue as I have above that it’s not supposed to be about the LRS, but practically speaking, it is whatever it is. There’s plenty of room to innovate and differentiate with the specification as it exists. It may be imperfect, but it does work beyond just fixing things that we eventually figured out were really stagnating about SCORM. If we could get the scope through LTSC and IEEE’s New Standards Committee, it might only take two years and we’d have one legitimate standard that could be adopted internationally.

The risk in following this path is that we’re ignoring the opportunity to create something better. While going this way doesn’t necessarily shut down the ability for people to own their own data, or to move data around from system to system, or even to make that transfer more secure and respectful of privacy, we’re forever linking the components above so tightly that it will stay a closed loop. Only the learning technology community will care for this and adopt it, making adoption difficult for HR, Enterprise Management and ERP systems (let alone audiences we’ve never talked with who might just want to adopt the data model) because, well… it’s “learning” and it requires “all the things” in the spec to adopt. And whether you care about adoption in the EU or not, the smart money says we need to look beyond learning departments inside the enterprise. Talent is the new Learning, and if we want something that finds meaningful adoption, this path risks missing greener shores. And let’s not forget what happens if we ever need to revise this one document. Should we ever need to make a change — even something as simple as a new transport mechanism, or the structure of a statement — the whole spec is going to be opened for revision. It’s near impossible to effectively manage an international standardization process that restricts scope at the document level.

The EU may not be interested in xAPI as one document that reflects the current specification, because of its ambiguities around security and privacy. They may be justifiably squeamish about tracking. As Avron Barr reminded me, we’ve certainly seen with the vehement rejection of InBloom that even in the United States, we all have some concerns about the privacy and security of learning data. Certainly, though… corporate, government and military interests in Asia and Latin America may embrace the spec as it is, simply because it solves a set of very painful problems and it does that well. And… even in the case of the EU, while the standard may likely break current implementations, it’s possible to focus the accommodations for security and privacy concerns on the areas that are prone to remain stable. Still, though, the way the spec is now, it forces adopters to collect data and to make it sharable. That’s not in the best interests of every organization.

Where I Stand

I debated weighing in myself on where I stand, but for those of you on the fence, maybe it will help you wrap your head around this nuanced issue. I personally lean on the side of breaking up the spec into three standards.

My wise friend Tom King put it like this:

“This issue could be framed as a core issue of monolithic versus modular. Or perhaps framed another way: what makes a spec, any spec, good?

“A monolithic approach has a few key benefits. And it seems better when there is no concern about backward compatibility and limited concern about forward flexibility. It can also help with clarity, as compatibility and adoption are “all-or-nothing” for the players. As a “1 document” spec there is just one big piece to manage — likely a speed advantage if document processes offer zero parallelism — and the ‘go backs’ all happen in one larger process if changing spec’d functionality in one place impacts a different spec’d functionality.”

A modular approach has its benefits. Tom shared his thoughts about the ACID test for databases. ACID stands for atomicity, consistency, isolation and durability. These are goals that every database should strive for, and when a database fails at any one of these, it is considered to be unreliable.

Tom asked me, “In this light, what makes a standard ‘reliable?’ Does one approach favor more or fewer of the ACID elements?” A modular approach is certainly atomic; it helps to ensure there’s consistency going forward for each component; it isolates the potential impact of changing any one component without needing to change the other components; it ensures that the pieces, should they never need a change, can endure and find more and more interesting uses. Not that I think of only this litmus test, but it’s a good litmus test.

While the investment of thought, time and money that has gone into xAPI so far is significant, like Tom, I don’t know of any organization that is currently so dependent on xAPI as it exists today that its bottom line is at significant risk from changes to the current spec or delays in standardization. Especially when I consider the long term.

If we go with the monolithic approach, it will likely make it difficult for people to innovate beyond the initial vision. We can’t foresee all the architectural decisions that would constrain us down the road, but we know from our history with learning technology specifications that something as simple as the requirement that SCORM’s API be presented in a “web browser” crippled any natural evolution or innovation. As Tom wrote to me, “Why couldn’t someone use the noun-verb-value model just for writing/storing objectives, or assessment criteria or gap analysis?” The way the spec exists today, they can’t, but it seems to me they should.

Once the standards are established, the investment and dependency on them will only increase as long as the standards are usable and useful. As Tom suggested to me, we need to adopt an approach that is both responsible and sustainable. We’re setting up a standard that, like SCORM, will impact industries for 20-30 years (at least). If we bundle too many big pieces together into one document, we’ll render the standard inflexible.

To me, the risks of not going for a modular approach simply outweigh the risks of sticking with a monolithic approach. The opportunities to be gained by going with a modular approach, in my mind, far outweigh the opportunities we can likely predict in keeping the monolithic approach.

The standard will likely break current implementations no matter how we proceed. We must seize the opportunity to address the difficult things we haven’t addressed. We couldn’t address them before; otherwise we wouldn’t have a spec to work with at all. We can do this now. By working with a diverse team representing the EU and other parts of the world, we can deliver a set of standards that will be relevant and significant, globally, for years to come.

A timetable for a modular approach could look something like this:

  • 1484.xx.a (the Data Model) – Draft: 2014-2015; Vetted: 2015-2016
  • 1484.xx.b (the API) – Draft: 2015-2016; Vetted: 2016-2017
  • 1484.xx.c (the LRS) – Draft: 2015-2017; Vetted: 2017-2018

The Data Model could be done quicker. The API probably should start once that’s kinda locked down. The LRS could start concurrently and will likely take longer because scoping where it really has to change from the existing specification is going to take a lot of time and discussion (and probably some debate).

These are my thoughts. IEEE LTSC is the appropriate place to figure out the timetables. International adopters, outside of NATO allies, are not able to work on this through ADL for obvious reasons, but they have come (and will come) to IEEE, and all sectors of adoption are welcome to work together there.

One More Thing

Even though I lean one way more than the other, while I lead this effort, every member of the LTSC who participates on the standards project (or standards projects) for xAPI is a volunteer with a vote. Starting with the intent to keep the spec as-is into standardization is no guarantee that it will stay as-is. Put another way, regardless of the path I scope, it’s necessary to know that current implementations will one day break for one reason or another.

What’s important to me, and what I think should be important to you, is the process by which the standards are shaped. The only way I can deliver a standard, or set of standards, that is better for learners, organizations and everyone’s non-trivial commercial interests in xAPI is with your active involvement and commitment to the standards effort once it launches.

If you care about what the standard will be, you will need to participate to protect your interest in it. That’s going to be a pain in the ass, but it’s honestly the only way you can hope to get what you need out of the effort. At the very least, active participation will help you “read the tea leaves” on what the future holds. I can see this through to the end and work with you to make it the best damn standard possible… but where everyone has a vote, no one can just wave their magic jazz hands and influence votes.

I’m committed to making better choices (no pun intended). I can’t possibly make everyone happy with the decisions I need to make, but I can read, I can listen and be wiser for it.

Thanks for staying with me this far. 🙂


Comments


  1. Lots to digest; let me speculate on this further

  2. Dear Aaron.

    We are developing analytics solutions on top of the current xAPI version. We are a European organisation based in Spain. It was clear to us from the first stage that European regulations would make the adoption of a “simple” LRS based on the current specification difficult, and that we would have to go with ad hoc implementations.

    Taking into account your own reflection and our perspective as developers, we support the idea of having three separate specs. I believe flexibility and a broader scope should be the most important issues to take into account during the development of the new proposal. In order to guarantee the return on investment in xAPI technology, it would be desirable for compatible solutions to be easily adaptable to other kinds of environments and contexts (e.g. sports analytics, or an HR talent management system).

    I believe that a modular approach will establish a more solid ground for the years to come. Early adopters could accept the breaking of the current specification if there is a clear promise of broadening the impact of xAPI in different industrial sectors.

  3. I definitely need more time with this. I’m leaning towards agreement that the standard should be broken into bits. However, I’m neither certain that I agree with the way you have it outlined above (though I’m not ready to disagree either), nor am I certain of how this would affect the specification itself.

    More to the point… simply breaking this up is not, in and of itself, a breaking change. It could LEAD to something going boom in the future, sure. However, since the adoption of this would happen slowly over the next four to five years, I think we’d have some time to adapt. In fact, during that time, we may very well find ourselves dealing with new technology or new ideas that would have us revisiting the spec anyway. So, during that time, it’s likely we’d do something that, while maybe not a breaking change, would certainly be disruptive. And in tech, being disruptive is usually a pretty good thing.

    So I’m not sure that concerns about this breaking things are warranted. If anything, it may allow us the flexibility to avoid things that would have otherwise been breaking changes later on in life!

    I am looking forward to a spirited discussion of this. I’m very curious how other folks are feeling about it.

    1. Aaron,

      It’s interesting that we are moving ahead. I read 3600 words, and I am excited, concerned, surprised, worried, and happy all at the same time.

      I have played multiple roles in my life: academic, business, research, coding-and-development, and more. And I feel we are mixing priorities by looking at the same spec through different roles: the one we had when we started (problem solving) versus the one we have now as an IEEE standard (research). Depending on how we market the idea, it can be a great change, and at the same time it bears a very high risk of killing the Experience API completely. We should be very careful going down either route.

      If we are looking at the priorities of a few select big organisations, as per your use cases, you might be right. The spec doesn’t allow you to do just about anything and call it 100% xAPI Compliant.

      But if we talk about “widespread adoption”, that will need a system that works right out of the box. People don’t want to spend too many resources (and too much money, as you already mentioned). If everything is just too open to do your own way, we run the risk of vendors going their own way and killing interoperability. Killing interoperability means expensive adoption.

      Today, when a vendor says a package is xAPI Compliant, we know the package will work on almost all platforms. But once we open it up for them to choose a, b, or c, a vendor might say it’s xAPI Compliant, but no one will know whether the package will work on all xAPI-enabled LMSs. We always had similar issues with SCORM due to people doing different things.

      Before deciding on anything, we should ask whether we will solve any problems, and/or whether we will create some new ones.

      Creating the best data models in research standards might help in the long term, so if we are looking at what people will use in 10-15 years, and then creating another spec (maybe xAPI 3.0) based on these three new standards, I am all for it.

      But if we are killing today’s Experience API spec, expecting that dividing the standards into three parts will improve its adoption over the next few years, I guess we are killing what we already built, going back twelve years to SCORM 1.2, and waiting another twelve years for the next usable spec we churn out of it. When I think wearing my client’s hat, I might want to stop all projects and plans to migrate away from SCORM 1.2 if another full-scale investment will be needed in two years.

      In the end, any spec that cannot be turned into a usable product for end users might take much more time to find adoption.

      I like the idea of plugging the holes in the current spec to make it more robust, and, in parallel, learning from it and working on new standards to build the spec that will be more suitable tomorrow.

      – Pankaj

      1. I kind of lean the other way… xAPI as a monolithic standard defining an LRS feels a lot like it has the potential to “just become” SCORM 2.0, whereas a modular set of standards has the possibility to become the basis for a universal eLearning infrastructure with a wide set of adaptable implementations.

        This warrants further investigation and discussion, but my inclination is always toward modularity and flexibility – precisely the things that SCORM lacked and xAPI has the potential to provide.

  4. As Aaron described, there are some clear benefits to splitting out certain pieces of the xApi — the data model in particular. I doubt it would have originally stood on its own without the existence of the API to feed the data model into. Calling those uses “non-conformant” currently doesn’t seem quite right; it’s more like the spec just doesn’t have anything to say about them, since it focuses on addressing the interface between an Activity Provider / xApi client and an LRS. One doesn’t need to ‘have’ an LRS as such to be conformant either, any more than content creators needed to have an LMS to be SCORM conformant. Just like SCORM conformance for content dealt with the interactions between content and an LMS, xApi conformance for Activity Providers deals with (or will deal with) what is sent to an LRS.

    All that said, it makes sense that if there are folks who are creating statements but aren’t sending them to an LRS, they might like some better definition of conformance than “these statements would be accepted by a conformant LRS if they were sent to one”. Like other commenters, I don’t see how that would necessarily be a breaking change for xApi as it stands, that data model could be pulled out and referenced.

    The suggested split between “API” and “LRS” seems less clear to me, one use case mentioned leads me to think of it as a split between a “write-only” API and the query side of the API. We’ve talked about the reverse in the spec group at one point, the ability to query w/o the ability to write. The reverse, accepting statements without any way to query them, is very intentionally non-conformant. One of the key design goals since the start of the xApi project (when everyone called it Project Tin Can), was “I want to get at my data”. Note: the way the security section is written an LRS just has to provide the ability to query, it’s OK in practice to not provide any credentials authorized to actually perform queries. This compromise is intended to put power in the hands of LRS administrators, so they can decide who or what tools have query access, not their LRS vendor.

    So, although I don’t immediately see all the proposed splits as good ones, the idea of splitting up the spec itself isn’t scary. I do have a couple of timing and process concerns though:

    First, I personally haven’t seen enough real world data on privacy and security to think we can come up with a good standardized solution to those concerns. Maybe the data is out there if we were to all put our heads together and focus on it, but we should be careful to make sure we have enough usage data to drive that process before we get too far down that road.

    Second, I strongly suggest that the current xApi spec group, moderated by ADL, is the right group to undertake such a split (if it is to occur). This group already has a good representation of the folks who originally developed the 1.0 version, and who are currently using the spec. This should make for a faster standardization process, and ensure unnecessary breaking changes are avoided.

  5. I have experience in standardisation, since I participated in several standards at the ISO level and before that at the European and French levels (disclosure: I’m a French national. Hi, world citizens!). I favor the idea of making the standard modular. Modular means more agile, easier to adopt, easier to evolve, etc. Standardization is a lengthy process that ends up under the scrutiny of people used to finding imprecisions. Being obliged to adopt everything as a block equates to a never-ending process.

    Now, where to draw the dividing lines? Functional blocks, from the implementors’ point of view, seem best to me. Today I released our first public piece of code using #xAPI, to record product tours. I started out thinking that just recording statements was nice; I ended up realizing that querying statements brought the level of persistence I needed. So splitting along write/read lines seems to me a bad signal to send to activity implementors.

    I already said that, as is, xAPI is simply against the law in several major countries: Europe, Australia, the UK and California, to name a few small ones. No standardization committee outside the US will ever accept a standard that forces implementors to break laws. It’s against the law to keep private records indefinitely and without giving the person the ability to ask for definitive deletion (no shadow copies allowed). Even Google had to comply; we comply by allowing users to drop their accounts. Other countries are also making moves to force services to store their citizens’ data on servers physically located in their countries.

    For this reason I would suggest that a 4th part of the spec group everything about retention policy, privacy, protection and security.

    One more reason for this fourth part: I think that xAPI is not strong enough on the matter of delegating access to Activity suppliers. If an LRS exposes a public endpoint, it shouldn’t allow basic authentication or long-lived OAuth tokens. Each activity should be encouraged to obtain a token with as short a lifetime as possible. This is because most activities are browser-based and will expose their tokens to anyone. Tokens should have a precise scope of possible queries. A token with query access should only allow me to query statements concerning me. As an individual, I wouldn’t be happy if, by signing on to a large MOOC, my records became available to any other participant in that learning experience.

  6. Vladimir Goodkovsky, PhD

    As Tom said “Why couldn’t someone use the noun-verb-value model just for writing/storing objectives, or assessment criteria or gap analysis?” The way the spec exists today, they can’t, but it seems to me they should.

    Indeed, a form “learner did this” is good for specification of a Task performance (which is supposed to be recordable, measurable), but xAPI does not specify a Task by itself (is it “environment”?)

    In general, Objectives can be specified in a different form “Learner has an ability/knowledge/skill”. Only such Objectives can be transferable across Tasks, Domains, Practices, …

    To support branching decisions in e-learning it is necessary to have:
    1. In the authoring phase, a set of expected Task performances for the learners (a learner can do this, or this, or that)
    2. In the learning phase, an actual Task performance (the learner did this) to compare against the expected ones and to make a branching decision.

    Can xAPI handle all the above?

    Thanks
    Vlad

  7. There’s a lot to respond to, each independent, but intertwined as well. My thoughts:

    1. Yes, the spec should be split. It makes a lot of sense, and there ought to be lessons in how well the ISO network layers have survived through immense technical advances. The TinCan statement is a low-level layer (a Data layer?), while the interaction of an unknown mobile app introducing itself to a store for the first time is another, higher layer, for example. It’s easier to understand that way too.

    Would the different strands be able to maintain cross compatibility though? The ISO style isolation of layers is a key factor in its success.

    2. I think even talking about breaking the current implementation at this stage of the xAPI will kill it. Vendors have implemented, and in many cases still are implementing. The authoring tools have only recently launched with xAPI ability. End users, on the other hand, have barely got their heads around how to make use of it. It’s too early. The split should happen, but it must be backwards compatible.

    3. At the moment the xAPI seems to be focused on how to get things in, not how to get things out. You can get gigs and gigs of statements, but making sense of the context is something else. TinCan recipes are a reaction to that, but they don’t solve it. For example, there is no conversation between an LRS and a new mobile app that says “Hi, I’m the ‘Meh’ app. This is what I do and these are the statements and state models to expect from me. I represent the Mega Corp organisation.” Without that it’s just noise — noise that’s difficult for non-technical data consumers to understand.

    4. We’re still learning, we haven’t found all the pot holes yet.

    5. 4 years is too far away. If you’re telling everyone that xAPI is going to radically change over the next 4 years, what do you think will happen to any investment in xAPI? How many innovators are you going to encourage to look at the existing spec?

    1. On point 3 — ‘data noise’ — this question is a favorite of the naysayers (along with privacy concerns). I believe the mechanism for collecting experiential data must be seen as, and developed alongside, the extension of ISD to “design for data” (copyright Craig Wiggins). The use of metrics, rather than ad hoc analysis or data mining, should preclude the chaotic buildup of likely-useless records.

      I also agree that it is possibly too early to disaggregate the components of the specification – because we are in an extended discovery phase which benefits from keeping early adopters’ builds working and components portable for collaboration to some extent.

      Is there a danger that we might take a ‘standards-oriented’ view rather than an ‘applications-oriented’ one before the awareness and adoption of the existing unified specification has diffused into the market sufficiently?

    2. Just a few thoughts on the issues raised:

      • On point 2: As a “still implementing” vendor, I’m leading a team building an LRS + analytics platform for xAPI, and I’m not worried about this. IMHO a well-implemented LRS or xAPI service should be able to conditionally validate based on the xAPI version headers in the current spec. Not a big deal if you’re building your tools well.
      • On point 3: This should be addressed in the standardization effort. The whole idea of the AP–LRS relationship is a bit abstract for consumers, and I think making the spec more flexible will make it easier to create more flexible tools that are more marketable, encouraging adoption of xAPI overall.
      • On point 4: Definitely.
      • On point 5: 4 years is a long time. One thing I was curious about was the choice to go with IEEE as the standards body instead of, say, the IETF (I’m of the “rough consensus and running code” school), and the RFC-type language of the current spec made me think that was the way it was headed. But the IEEE is a standards body of competent jurisdiction, even if its process tends to be slower and more formal. As long as draft versions of the evolving spec are published, early adopters can adopt draft features with appropriate documentation and declaration to clients, so it’s not like we necessarily have to wait for it to be finalized (think 802.11n for WiFi).

  8. I think what we’re struggling with here is a bit of a standard standards conundrum: a standardization process requires hearing the (expert) community and is a defined pathway to a still undefined outcome. The IEEE LTSC is trying to ensure due diligence in that process, so the xAPI 1.0 spec cannot already be the result of this process – and changes can be expected (a version 2.0).

    Now about the modularization: I think it makes sense — the target groups that need to have their say about these three working areas are very different from each other and e.g. the training managers that need to adapt an abstract data model for the expression of their localized vocabularies will probably not be interested in a deep technological discussion about API functionality (or about the best way to implement real-time tracking with high performance).

    This may call for planning in different and separate meetings in the development of these sub areas — so that people interested in having their say can choose which one(s) to attend.

    I also share your concern that cooking three dishes simultaneously is a bit of a burden — but it’s still one menu being served in the end and I’m sure that the cooking process can be planned and managed in a way not to overwhelm you as the organizer nor us as the cooking assistants 😉

  9. I welcome this discussion in particular as it allows for a more in-depth discussion of two issues (1 and 2) I brought up earlier in the xApi spec group.
    Short summary of my position:
    1. Modularization: I support the modularization approach and the modules proposed. In addition, other modules may be required to cover the set (which?) of intended core use cases. A closer analysis of the use cases may reveal the need for further modules (like access control).
    2. Privacy: Quite frankly, not having data protection implemented in a clearly specified way is a blocker for the deployment of any xApi-related system in Europe, and in particular here in Germany. Besides protecting access to gathered data, this has yet another aspect: data collection. It is illegal for us to collect personal data unless they are needed for a particular purpose the learner has agreed to. Therefore we need ways to specify which data are collected. (Sidenote: this means we need restrictive application profiles of xApi, and tools to enforce that they are respected.)
    3. Backward compatibility: I wouldn’t expect that big changes in data formats will be necessary. Therefore I hope that some stakeholders involved in the spec group will provide affordable data conversion tools that can be used to keep existing implementations going through a transition period.

    Background considerations:
    If I understand Aaron correctly, 1484.xx.a is the data model for static data, 1484.xx.b is its binding to a specific format and 1484.xx.c is the specification of communication.
    Separation of data model and binding allows for a more abstract description of uses cases and for a more clear definition of the interface with related specs (like Activity stream, RDF or 1484.xx.c). For my own work separation of the data model would allow me to work in my XML based environment using XSLT, XPath etc. for data transformation and data analysis which are not available for JSON.
    1484.xx.c might require a similar split between data model and binding.
    Having an abstract data model would make it a lot easier to consider combinations of xApi with other specs. I wouldn’t be surprised if other already existing specs or even their implementations could be re-used to rapidly specify missing parts (e.g. XACML for specifying access control policies comes to mind).
    I also hope that considering xApi within the ecosystem of IEEE specs will initiate a clear outline of the xApi domain, thus avoiding the “problem -> quick xApi solution” for problems which are better handled with or in combination of xApi with other specs.
    Finally, the split between 1484.xx.a/b and 1484.xx.c may also have consequences for conformance testing. It should be relatively easy, with existing technology, to provide conformance testing tools for the data model, even for profiled versions of it. On the other hand, it is hardly possible to design a test tool for a specification of communication from the green table – one needs to collect experience on what really must be tested.

    1. > I wouldn’t be surprised if other already existing specs or even their implementations
      > could be re-used to rapidly specify missing parts (e.g. XACML for specifying access
      > control policies comes to my mind).
      Very good point — that’s what standards are for, after all! Which leads me to think…

      …maybe IEEE ought to be working on Privacy and Data protection standards before considering anything else? Some sort of wrapper around any piece of information. The world we live in today has shifted substantially from where it was and now we need to respond to those demands. Some form of underlying, trusted infrastructure would be handy, even if it’s a first attempt.

  10. I think separate specifications, primarily because that allows each component of this to evolve as needed over time. The future is foggy, and separation provides more flexibility.

    The thing that really sold me on this was the data model in use without an LRS… here’s where things get really interesting.

    I believe most of this was designed primarily with users (persons) in mind. The Internet of Things wasn’t really in motion (publicly) at the time the spec was started. However, now that we know that systems are interacting both with people and each other, and learning, there is new opportunity.

    The data model can facilitate understanding all those interactions and interchange of information. There doesn’t need to be an LRS go-between (there certainly can be where needed). Systems, as they learn, would be able to communicate more effectively with users and each other, sharing needed data.

    Example:
    Temperature-sensitive warehousing with robotics.

    Warehouse A has a temperature system failure.

    Shared data advises carriers in Warehouse A that items need to be moved to [Warehouse B].

    [Warehouse B] can coordinate movement of items to account for inventory that would need to be received.

    Warehouse management alerted to the issues, and the movements in play to compensate for the issue.

    Trucking routing could communicate that products have been moved from the Loading Bay of Warehouse A to [Warehouse B].

    Etc…

    A nice thing about this: a standard spec can allow more plug-and-play components to share data, in lieu of trying to coordinate proprietary systems.

    The other specs can certainly participate as needed, but for me, allowing systems to more effectively learn to dance with both users and other systems (which are now ALSO learners) is what sold me on the “separate specs” approach.

    For me, it makes sense when you realize that in this day and age, systems are also learners.

    1. Yes! I have been thinking along the same lines…

  11. Having a strong web development background, I approve of a strong separation of concerns (the modular approach). I agree with Ben Clark that the second module (the API) seems like a somewhat nebulous distinction. If you’re talking about validating the data, then you don’t need a second spec — you just check whether the data matches the first spec.

    If you’re talking about processes by which you send and receive data, then that makes a little more sense. The data spec would be the educational equivalent of the JSON spec, and the API spec would be like the REST spec. It may or may not also include the security standards for send/receive/modify operations. However, these two specs would start to make the third spec redundant – you’ve already got the data format and how to fetch/retrieve data. As long as the LRS does those two things to the spec, why mandate its workings? From a web developer’s point of view, if I know what data I’m getting from a server, and how I’m getting/saving/modifying it, I don’t really care a lot more about how the server and database are functioning.

  12. This is a complex issue, but after considering the original post and the comments, I’m also starting to think that it may make sense to split the current spec into multiple specs.

    I see two natural areas of focus–the data/transport and the LRS, which accepts and presents that data. That would make me lean towards two specs, but that may just be because I’m not fully understanding why/when I might want to validate incoming data if I did not have an LRS. Maybe we would need three specs or maybe validation should just be one part of the LRS spec.

    As others have already commented, I would hope that we can strive for backwards compatibility, at least in the statements. I believe we are developing momentum and if we break compatibility too early, we could slow the rate of adoption.

    I would also echo the desire for modularity as it may help preserve simplicity. And I see simplicity as a very important feature as I believe it contributes to flexibility.

    To me, one of the best aspects of Tin Can is that it was designed to be relatively simple and extremely flexible. Instead of trying to define all possible uses, Tin Can was a framework which essentially “enabled” other activities. Because that framework was relatively simple, it wasn’t too difficult to adapt and incorporate it in a wide range of systems and products. And I believe that developers are using that framework to do some very different things–probably things that the Tin Can developers never really anticipated.

    I hope that if we can retain that simplicity, we may also keep that flexibility, so we can all continue to use the technology to do interesting and useful things.

  13. I favor the idea of making the standard modular, and think it makes sense. Just my $.02.

  14. I agree that the modular approach is the better route for future adoption. I view this as I would coding – you place specific groups of code in their own namespace to avoid confusion and overlap. Each one can be used independently of each other and they can be used in conjunction as well.

    It is up to the adopter which specs apply to their situation. As long as each spec is more or less built from the existing xAPI spec and retains backwards compatibility (as much as possible) for functionality, there shouldn’t be much breaking happening for existing adopters, besides stating that they are now 1484.xx.x compliant (or xAPI.x) as opposed to xAPI compliant. And if there are significant coding changes that need to happen, they will at least have a base to start from.