hrefna,
@hrefna@hachyderm.io avatar

I'm not going to talk about the specifics of this request (https://fosstodon.org/@mo8it/112056453394255413) but just note a few things as they relate to protocols like ActivityPub:

  1. When we talk about "semantic meaning", this is exactly the sort of problem we need to think about how to solve. If I search for, or post using, "poly", it should be clear whether I mean "Polynesian" or "Polyamory". If I talk about rust, it should be clear whether I am talking about "Rust the movie" xor "rust lang".

1/

hrefna,
@hrefna@hachyderm.io avatar
  2. But when we talk about protocols, we seem not to care about this. The tools we have are focused more on structural differences: Article vs. Document vs. Note. We don't even agree there about what each of them means. And we don't address that underlying meaning at all, nor do we have consistent patterns for communicating it.

  3. Merely using JSON-LD, or any other linked data system, does not fix this, nor does it provide consistent mechanisms for solving it.

2/

hrefna,
@hrefna@hachyderm.io avatar

The way I would solve this personally requires standardizing the frame that contains the data. It's easier to have flexibility in the data if the structure that the data goes into is highly regulated and consistent. This doesn't fix the problem per se, but it lets you more effectively use tools that help address it.

But that's a poor fit for the protocols and technologies we use today in this space.

Something to consider, then, is how we fix this.

3/3

smallcircles,
@smallcircles@social.coop avatar

@hrefna

> standardizing the frame

Do you mean "frame" as in context or setting?

In DDD (domain-driven design) terms: seek agreement - among a group of peers/stakeholders - on the meaning and naming (a ubiquitous language) of concepts in a bounded context, then use those consistently, including in specs for a domain-specific AP extension.

If interop means "adhere to that spec", it's fixed if this collaboration happens, and JSON-LD formatted msgs are an implementation detail.

Unless the position is "AP is a linked data standard and supports ontology mapping", in which case we are far from home.

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles As in nested protocols operating on different domains.

As an example:

  • Message
    • Tags
      • #'rust: <rustlang uri>
    • Body
      • type: text/plain
      • Content
        • "This is a good example of <<#'rust>>'s affine types"

It's then part of the protocol that when you see <<#'rust>> you look first in the tags to find what URI it references, and that reference goes to a semantically meaningful location.

But it's not just that you can do it, it's that every piece is understood.
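
The lookup rule described here can be sketched in a few lines. This is a hypothetical illustration, not a defined protocol: the field names (`tags`, `body`, `content`) and the rust-lang URI are my own placeholders.

```python
# Hypothetical sketch: resolve <<#'name>> references in a message's
# Content by looking them up in the message's Tags first.
import re

def resolve_references(message: dict) -> list[tuple[str, str]]:
    """Return (name, uri) pairs for every <<#'name>> in the content."""
    tags = message["tags"]
    content = message["body"]["content"]
    refs = []
    for match in re.finditer(r"<<#'(\w+)>>", content):
        name = match.group(1)
        # Protocol rule: the reference must be declared in Tags, and the
        # URI there points at a semantically meaningful location.
        refs.append((name, tags[name]))
    return refs

msg = {
    "tags": {"rust": "https://example.org/ontology/rust-lang"},  # placeholder URI
    "body": {
        "type": "text/plain",
        "content": "This is a good example of <<#'rust>>'s affine types",
    },
}
print(resolve_references(msg))  # [('rust', 'https://example.org/ontology/rust-lang')]
```

A reference not declared in Tags raises a `KeyError` here, which matches the rule that every `<<#'name>>` must resolve through the frame rather than through a generic text search.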

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles

Message is semantically just saying "here's some content" and can be processed by any MessageProcessor. It has standardized fields.

Tags is a list of "things that appear on the Content with a specialized syntax and refer to a semantically meaningful—and thus not a generic text search—URI"

Body contains the actual content.

There are a variety of advantages to this approach (nested parsers being one of them), but the key is that I know the relationship between tag and content.
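
One way to picture the "nested parsers" advantage, sketched with hypothetical names: the outer processor understands only the standardized frame and selects a nested parser for the Body from its declared type.

```python
# Hypothetical sketch: the outer MessageProcessor knows only the frame;
# the Body is handed to a nested parser chosen by its type field.

def parse_plain(content: str) -> str:
    # Nested parser for text/plain bodies (identity here).
    return content

def parse_html(content: str) -> str:
    # Placeholder nested parser for text/html bodies.
    return content

BODY_PARSERS = {
    "text/plain": parse_plain,
    "text/html": parse_html,
}

def process_message(message: dict) -> str:
    body = message["body"]
    parser = BODY_PARSERS[body["type"]]  # frame-level dispatch
    return parser(body["content"])       # nested, type-specific parse

print(process_message({"body": {"type": "text/plain", "content": "hello"}}))  # hello
```

The frame stays fixed while the set of body parsers can grow, which is where flexibility-inside-a-regulated-structure shows up.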

hrefna,
@hrefna@hachyderm.io avatar

@smallcircles (As you might be able to tell, my general philosophy and bias on protocol design is that if it doesn't change how the object is either parsed or structured then it doesn't belong in the protocol at that level; under that model it's perfectly reasonable for the Body to have something that indicates the Content is an Article if the end processor knows how to process that, but you wouldn't declare the Message to be an Article because it doesn't change how the message is parsed)

smallcircles,
@smallcircles@social.coop avatar

@hrefna ah yes, that makes sense. And sorry, I parsed your toots without the context of the thread you linked.

Then things are more in the general structure of:

  • Message
    • Metadata
    • Payload

Where the metadata indicates subprotocols for processing the payload.

smallcircles,
@smallcircles@social.coop avatar

@hrefna

I took a look at the #CloudEvents spec, as I thought it was set up around subprotocols specifically. I misremembered that, but it sorta kinda boils down to the same thing.

Reading the primer was interesting though. E.g. the layered architecture model of:

  • Base specification
  • Extensions
  • Format encodings
  • Protocol bindings
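
For illustration, the base specification's required attributes appear directly in a minimal event in the JSON format encoding; all the values below are placeholders of my own.

```python
# A minimal CloudEvent in the JSON format encoding.
# The base spec requires: specversion, id, source, type.
import json

event = {
    "specversion": "1.0",
    "id": "A234-1234-1234",                  # placeholder id
    "source": "https://example.com/source",  # placeholder source
    "type": "com.example.object.created",    # placeholder type
    "datacontenttype": "application/json",   # optional attribute
    "data": {"hello": "world"},
}

required = {"specversion", "id", "source", "type"}
assert required <= event.keys()
print(json.dumps(event, indent=2))
```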

And it was also funny to find that Adobe once used #ActivityStreams in their event format (I think they now use CloudEvents):

https://github.com/cloudevents/spec/blob/main/cloudevents/primer.md#adobe---io-events
