brokenix,
@brokenix@emacs.ch avatar
  • Meaning of the linear arrow: f :: s ⊸ t guarantees that if (f u) is consumed exactly once, then the argument u is consumed exactly once. To consume a value exactly once:
    • an atomic base value (like Int or Ptr): just evaluate it;
    • a function value: apply it to one argument, and consume its result exactly once;
    • a pair: pattern-match on it, and consume each component exactly once;
    • a value of an algebraic datatype generally: pattern-match on it, and consume all its linear components exactly once.
  • A linear arrow specifies how f uses its argument; it does not restrict the arguments to which f can be applied. In particular, a linear f cannot assume that it is given the unique pointer to its argument. For example, if f :: s ⊸ t, then this is fine:

g :: s → t
g x = f x

    The type of g makes no particular guarantees about the way in which it uses x; in particular, g can pass that argument to f.
  • The result of (unsafeFreeze ma) is a new immutable array, but to avoid an unnecessary copy it is actually ma itself. The intention is, of course, that unsafeFreeze should be the last use of the mutable array; but nothing stops us continuing to mutate it, with quite undefined semantics. The "unsafe" in the function name is a GHC convention meaning "the programmer has a proof obligation here that the compiler cannot check".
  • The other unsatisfactory thing about the monadic approach to array construction is that it is overly sequential. Suppose you had a pair of mutable arrays, with some updates to perform on each; these updates could be done in parallel, but the ST monad would serialise them.
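
A minimal sketch of the monadic construction pattern criticised above, written against the standard Data.Array.ST and Data.Array.Unsafe modules rather than the paper's own MArray API; the unsafeFreeze at the end is exactly the unchecked proof obligation just described.

{-# LANGUAGE ScopedTypeVariables #-}

import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array (Array)
import Data.Array.ST (STArray, newArray, writeArray)
import Data.Array.Unsafe (unsafeFreeze)

-- Build an immutable array by mutating a fresh STArray and freezing it in
-- place.  unsafeFreeze avoids a copy, so nothing may touch ma afterwards:
-- a proof obligation the compiler cannot check.
array :: forall a. Int -> a -> [(Int, a)] -> Array Int a
array size def pairs = runST go
  where
    go :: forall s. ST s (Array Int a)
    go = do
      ma <- newArray (0, size - 1) def :: ST s (STArray s Int a)
      forM_ pairs (\(i, v) -> writeArray ma i v)
      unsafeFreeze ma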

• Operations on files remain monadic, unlike the case with mutable arrays. I/O operations
affect the world, hence must be sequenced. It is not enough to sequence operations on
files individually, as it was for arrays.
• We generalise the IO monad so that it expresses whether or not the returned value is linear:
we add an extra multiplicity type parameter p to the monad IOL, where p can be 1 or ω,
indicating a linear or unrestricted result, respectively. Now openFile returns IOL 1 File, the "1"
indicating that the returned File must be used linearly.
• As before, operations on linear values must consume their input and return a new one; here
readLine consumes the File and produces a new one.
• Unlike the File, the ByteString returned by readLine is unrestricted, and the type of readLine
indicates this.
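
Collected as signatures, the file API sketched in these bullets looks roughly like this (reconstructed from the prose above, so treat the exact spellings as approximate):

openFile  :: FilePath → IOL 1 File
readLine  :: File ⊸ IOL 1 (File, Unrestricted ByteString)
closeFile :: File ⊸ IOL ω ()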

  • Uniqueness (or ownership) types ensure that an argument of a function is not used elsewhere in the expression's context, so the callee can work with the argument as it pleases. Linear types and uniqueness types have a 'weak duality': 'Seen as a system of constraints, uniqueness typing is a non-aliasing analysis, while linear typing provides a cardinality analysis.'1
    • As an API for pure mutable arrays, the original Linear #Haskell paper [2] (arXiv version) featured the function: newMArray :: Int -> (MArray a %1 -> Ur b) %1 -> Ur b
    • By taking a linear function as an argument, newMArray can make sure that the MArray is, in fact, not aliased. This is because linear types only restrict how a function may use its argument; they can't restrict how the context uses an argument or a returned value. Contrast this with uniqueness types (aka ownership types) in languages like Rust or Clean, which can demand that a value not be aliased. So whenever we use linear types in Haskell to implement an API with a uniqueness requirement, we end up with such a newX function that takes a linear-function argument (a sketch follows after this list of points). With newMArray and newMArrayBeside we already have two functions to create new arrays. This isn't satisfactory, but it gets worse: if I have another type with uniqueness constraints, say a ring buffer, then I'll eventually need the ability to let a ring buffer piggyback on an array's scope, and an array to piggyback on a ring buffer's scope. With n types that can only be used linearly, I'll need n² new functions. This is simply not sustainable. 3
    • Of the many problems with Monads, one is that they are not composable, and Monad transformers are just a hack: you still have a stack of monads, and how they interact when control flow gets involved can be hard to predict. Compare this with uniqueness typing (and syntactic sugar for in/out arguments), where it is trivial to have multiple states used by the same function. 4
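
As a concrete illustration of the continuation-passing shape described above, here is array construction written against the paper's §2.1 API; newMArray, write, freeze and the multiplicity-polymorphic foldl (shown later in this thread) are assumed to be in scope with the quoted signatures, so this is a sketch rather than code from a shipped library:

-- assumed signatures (paper notation):
--   newMArray :: Int → (MArray a ⊸ Unrestricted b) → Unrestricted b
--   write     :: MArray a ⊸ (Int, a) → MArray a
--   freeze    :: MArray a ⊸ Unrestricted (Array a)
array :: Int → [(Int, a)] → Unrestricted (Array a)
array size pairs = newMArray size (λma → freeze (foldl write ma pairs))

The mutable array never escapes the callback, which is how the API enforces non-aliasing without true uniqueness types.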
brokenix,
@brokenix@emacs.ch avatar

Here is Linear Haskell's list type, whose constructor (:) uses linear arrows:
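
A sketch of that declaration, roughly as the paper gives it (§2.2):

data [a] where
  []  :: [a]
  (:) :: a ⊸ [a] ⊸ [a]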


  • Just as with pairs, this is not a new, linear list type: it is Linear Haskell's list type, and all existing Haskell functions will work over it perfectly well. Moreover, many list-based functions are in fact linear, and can be given a more precise type. For example, we can write (++) as shown below. That type says that if (xs ++ ys) is consumed exactly once, then xs is consumed exactly once, and so is ys, and indeed our type system will accept this definition. Giving a more precise type to (++) only strengthens the contract that (++) offers to its callers; it does not restrict its usage.
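
The definition of (++) is the ordinary one; only its type is strengthened (following the paper):

(++) :: [a] ⊸ [a] ⊸ [a]
[]       ++ ys = ys
(x : xs) ++ ys = x : (xs ++ ys)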

sum :: [Int] ⊸ Int  
f :: [Int] ⊸ [Int] → Int  
f xs ys = sum (xs ++ ys) + sum ys

  • Here the two arguments to (++) have different multiplicities, but the function f guarantees that it will consume xs exactly once if (f xs ys) is consumed exactly once. For an existing language, being able to strengthen (++), and similar functions, in a backwards-compatible way is a huge boon. Of course, not all functions are linear: a function may legitimately demand unrestricted input. For example, the function f above consumes ys twice, and so f needs an unrestricted arrow for that argument. Moreover, we can use the very same pair and list types to contain linear values (such as mutable arrays) without compromising safety.

upd :: (MArray Char, MArray Char) ⊸ Int → (MArray Char, MArray Char)  
upd (a1, a2) n | n ⩾ 10    = (write a1 n 'x', a2)
               | otherwise = (write a2 n 'o', a1)

  • Suppose we want to pass a linear MArray and an unrestricted Int to a function f. We could give f the signature f :: MArray Int ⊸ Int → MArray Int. But suppose we wanted to uncurry the function; we could then give it the type f :: (MArray Int, Int) ⊸ MArray Int. But this is no good: now f is only allowed to use the Int linearly, whereas it might actually use it many times. For this reason it is extremely useful to be able to declare data constructors with non-linear types, for example:

data PLU a b where {PLU :: a ⊸ b → PLU a b}  
f :: PLU (MArray Int) Int ⊸ MArray Int

  • We use GADT-style syntax to give an explicit type signature to the data constructor PLU, with mixed linearity. Now, when constructing a PLU pair, the type of the constructor means that we must always supply an unrestricted second argument; and dually, when pattern-matching on PLU we are free to use the second argument in an unrestricted way, even if the PLU value itself is linear. Instead of defining a pair with mixed linearity, we can also write:

data Unrestricted a where
  Unrestricted :: a → Unrestricted a

f :: (MArray Int, Unrestricted Int) ⊸ MArray Int

The type (Unrestricted t) is very much like "!t" in linear logic, but in our setting it is just an ordinary user-defined datatype. We already used it above, where the result of read was a pair of a linear MArray and an unrestricted array element:


read :: MArray a ⊸ Int → (MArray a,Unrestricted a)
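
A small, hypothetical illustration (not from the paper) of what the wrapper buys: pattern-matching on Unrestricted releases its payload with multiplicity ω, so even a linear function may duplicate it.

dup :: Unrestricted a ⊸ (a, a)
dup (Unrestricted x) = (x, x)   -- x may be used any number of times after the match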

  • Multiplicity polymorphism. A linear function provides more guarantees to its caller than a non-linear one: it is more general. But the higher-order case thickens the plot. Consider the standard map function over (linear) lists:

map f [ ] = [ ]  
map f (x : xs) = f x : map f xs

  • It can be given the two following incomparable types: (a ⊸ b) → [a] ⊸ [b] and (a → b) → [a] → [b]. Thus, Linear Haskell features quantification over multiplicities and parameterised arrows (A →q B). Using these, map can be given the following more general type: ∀p. (a →p b) → [a] →p [b]. Likewise, function composition and foldl can be given the following general types:

foldl :: ∀p q. (a →p b →q a) → a →p [b] →q a  
(◦) :: ∀p q. (b →p c) ⊸ (a →q b) →p a →p·q c  
(f ◦ g) x = f (g x)

  • The type of (◦) says that two functions that accept arguments of arbitrary multiplicities (p and q, respectively) can be composed to form a function accepting arguments of multiplicity p · q (i.e. the product of p and q). IOL is just a generalisation of the IO monad, thus:

type IOL p a

returnIOL :: a →p IOL p a
bindIOL   :: IOL p a ⊸ (a →p IOL q b) ⊸ IOL q b

The idea is that if m :: IOL 1 t, then m is an input/output computation that returns a linear value of type t. But what does it mean to "return a linear value" in a world where linearity applies only to function arrows? Fortunately, in the world of monads each computation has an explicit continuation, so we just need to control the linearity of the continuation arrow. More precisely, in an application m `bindIOL` k where m :: IOL 1 t, we need the continuation k to be linear, k :: t ⊸ IOL q t. And that is captured by the multiplicity-polymorphic type of bindIOL.

brokenix,
@brokenix@emacs.ch avatar
  • The bind and return combinators of IOL can be used in the familiar way. The difference from the usual monad is that multiplicities may be mixed, but this poses no problem in practice:

printHandle :: File ⊸ IOL ω ()
printHandle f = do
  { (f, Unrestricted b) ← atEOF f       -- atEOF :: File ⊸ IOL 1 (File, Unrestricted Bool)
  ; if b
    then closeFile f                    -- closeFile :: File ⊸ IOL ω ()
    else do
      { (f, Unrestricted c) ← read f    -- read :: File ⊸ IOL 1 (File, Unrestricted Char)
      ; putChar c                       -- putChar :: Char → IOL ω ()
      ; printHandle f } }
    • Here atEOF and read return a linear File that should eventually be closed, but closeFile and putChar return an ordinary non-linear (). So this sequence of operations has mixed linearity. Nevertheless, we can interpret the do-notation with bindIOL:

read f `bindIOL` λ(f, Unrestricted c) →
putChar c `bindIOL` λ_ →
...
  • Such an interpretation of the do-notation requires Haskell's -XRebindableSyntax extension, but if linear I/O becomes commonplace it would be worth considering a more robust solution. Internally, hidden from clients, GHC actually implements IO as a function, and that implementation too is illuminated by linearity, like so:

data World
newtype IOL p a = IOL { unIOL :: World ⊸ IORes p a }
data IORes p a where
  IOR :: World ⊸ a →p IORes p a
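
Against that representation, bindIOL is ordinary world-passing; the following is a reconstruction of the obvious definition rather than a quote from the paper:

bindIOL :: IOL p a ⊸ (a →p IOL q b) ⊸ IOL q b
bindIOL (IOL m) k = IOL (λw → case m w of IOR w' x → unIOL (k x) w')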
  • One might think that since a linear function consumes its argument exactly once, it must also be strict. But not so:

f :: a ⊸ (a, Bool)
f x = (x, True)

Here f is certainly linear, given the type of (,). That is, if (f x) is consumed exactly once, then each component of its result pair is consumed exactly once, hence x is consumed exactly once. But f is certainly not strict: f ⊥ is not ⊥.
  • Each binding in Γ, of the form x :π A, includes a multiplicity π. The familiar judgement Γ ⊢ t : A should be read as follows: consuming the term t : A exactly once will consume each binding (x :π A) in Γ with its multiplicity π.
  • We think of the types in Γ as inputs of the judgement, and of the multiplicities as outputs. What is the equivalence of multiplicities?
  • The equivalence of multiplicities is the smallest transitive, reflexive relation which obeys:
    • + and · are associative and commutative
    • 1 is the unit of ·
    • · distributes over +
    • ω · ω = ω
    • 1 + 1 = 1 + ω = ω + ω = ω
  • Thus, multiplicities form a semi-ring (without a zero), which extends to a module structure on typing contexts.
    • Type preservation: if a is well-typed, and a ⇓ b or a ⇓∗ b, then b is well-typed.
    • Progress: evaluation does not block. That is, for any partial evaluation a ⇓∗ b, where a is well-typed, the derivation can be extended.
    • In-place update & typestate: linear types can be used to implement some operations as in-place updates, and typestates (such as whether an array is mutable or frozen) are actually enforced by the type system.
  • Type preservation (for states): for any well-typed σ, if σ ⇓ τ or σ ⇓∗ τ, then τ is well-typed.
  • Progress (for states): evaluation does not block. That is, for any partial evaluation σ ⇓∗ τ, with σ well-typed, the evaluation can be extended. In particular, typestates need not be checked dynamically.
  • Case rule. Thanks to caseω we can use linear arrows on all data types. Indeed we can write the following and have fst typecheck:

data Pair a b where
  Pair :: a ⊸ b ⊸ Pair a b

fst :: Pair a b → a
fst x = caseω x of Pair a b → a

It is possible to do without caseω and have only case1. Consider fst again: we could instead have

data Pair p q a b where
  Pair :: a →p b →q Pair p q a b

fst :: Pair 1 ω a b ⊸ a
fst x = case1 x of Pair a b → a

But now multiplicity polymorphism infects all basic datatypes (such as pairs), with knock-on consequences. Moreover, let is annotated, so it seems reasonable to annotate case in the same way.
  • That is, caseω allows us to meaningfully inhabit ∀a b. Unrestricted (a, b) ⊸ (Unrestricted a, Unrestricted b), while linear logic does not.
  • Subtyping. Because the type A ⊸ B only strengthens the contract of its elements compared to A → B, one might expect A ⊸ B to be a subtype of A → B.
  • But if f :: Int ⊸ Int and g :: (Int → Int) → Bool, then the call (g f) is ill-typed, even though f provides more guarantees than g requires. On the other hand, g might well be multiplicity-polymorphic, with type ∀p. (Int →p Int) → Bool, in which case (g f) is typeable. And while (g f) is ill-typed in the core calculus λq→, it is accepted in Linear Haskell.
  • The reason is that the η-expansion g (λx → f x) is typeable, and Linear Haskell performs this expansion during type inference. Such an η-expansion is not completely semantics-preserving: λx → f x is always a well-defined value, even if f loops, and this difference can be observed with Haskell's seq operator.


brokenix,
@brokenix@emacs.ch avatar

Nevertheless, such η-expansions are already standard practice in GHC: a similar situation arises when using rank-2 types. The core language of #ghc does not accept g f for

g :: (∀a. (Eq a, Show a) ⇒ a → Int) → Char
f :: ∀a. (Show a, Eq a) ⇒ a → Int

The surface language, again, accepts g f, and elaborates it into g (λa (d1 :: Eq a) (d2 :: Show a) →
f a d2 d1). We simply extend this mechanism to linear types.
Polymorphism & multiplicities. Consider the definition: “id x = x”. Our typing rules would
validate both id :: Int → Int and id :: Int ⊸ Int. So, since we think of multiplicities ranging over
{1,ω}, surely we should also have id :: ∀p. Int →p Int? But as it stands, our rules do not accept it.
To do so we would need x :p Int ⊢ x : Int. Looking at the (var) rule in Fig. 6, we can prove that
premise by case analysis, trying p = 1 and p = ω. But if we had a domain of multiplicities which
includes 0 (see Sec. 7.2), we would not be able to prove x :p Int ⊢ x : Int, and rightly so because it
is not the case that id :: Int →0 Int.
More generally, we could type more programs if we added more laws relating multiplicities,
such as p + q = ω, but this would prevent potential extensions to the set of multiplicities.
Divergence. Consider this definition6

f :: [Int] ⊸ [Int]
f xs = repeat 1 ++ xs

Does f really consume its argument xs exactly once? After all, (repeat 1) is infinite so f
will never evaluate xs at all.
In corner cases like this we look to metatheory. Yes, the typing rules give the specified types
for (++) and f.

  • Yes, the operational claims guaranteed by the metatheory remain valid. Intuitively you may imagine it like this: linearity claims that if you were to consume the result of f completely, exactly once, then you would consume its argument exactly once; but since the result of f is infinite, we cannot consume it completely exactly once, so the claim holds vacuously.
  • To implement the linear arrow, we added a multiplicity annotation to function arrows, as in the core calculus λq→. The constructor for arrow types is constructed and destructed frequently in #ghc's type checker, and this accounts for most of the modifications to existing code.
  • Multiplicities are an output of the type inference algorithm. In order to infer the multiplicities coming out of a case expression, we need a way to aggregate the multiplicities coming out of the individual branches. To this effect we compute, for every variable, the join of its multiplicities across the branches (illustrated below).
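
A hypothetical illustration (not from the paper) of that aggregation: the first branch uses ys once and the second uses it twice, so the inferred multiplicity of ys is the join of 1 and ω, namely ω, and g gets an unrestricted arrow for ys.

g :: Bool -> [Int] -> [Int]
g b ys = case b of
  True  -> ys          -- ys used with multiplicity 1 in this branch
  False -> ys ++ ys    -- ys used with multiplicity ω in this branch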
  • The #bsd socket API is a standard, if not the standard, through which computers connect over networks. It involves a series of actions which must be performed in order: on the server side, a freshly created socket must be bound to an address, then start listening to incoming traffic, then accept connection requests; such a connection is returned as a new socket, and this new socket can now receive traffic. The precise sequence of actions is protocol-dependent: for TCP traffic you would do as described, but for UDP, which does not need connections, you would not accept a connection but receive messages directly.
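
A sketch of the kind of typestate-indexed socket API this suggests in Linear Haskell; the state and operation names here are illustrative, following the shape described in this thread rather than any published library:

data State = Unbound | Bound | Listening | Connected
data Socket (s :: State)

socket  :: IOL 1 (Socket Unbound)
bind    :: Socket Unbound ⊸ SocketAddress → IOL 1 (Socket Bound)
listen  :: Socket Bound ⊸ IOL 1 (Socket Listening)
accept  :: Socket Listening ⊸ IOL 1 (Socket Listening, Socket Connected)
receive :: Socket Connected ⊸ IOL 1 (Socket Connected, Unrestricted ByteString)
close   :: Socket s ⊸ IOL ω ()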
    • The socket library for #Haskell exposes precisely this sequence of actions. Programming with it is exactly as clumsy as with socket libraries for other languages: after each action the state of the socket changes, as do the permissible actions, but these states are invisible in the types.
    • SpriteKit's pure interface takes an update function u :: Scene → Scene which is tasked with returning the next state that the screen will display. The scene is first converted to a pure tree where each node keeps, along with the pure data, a pointer to its imperative counterpart when it applies, or Nothing for new nodes:

data Node = Node
  { payload  :: Int
  , ref      :: Maybe (IORef ImperativeNode)
  , children :: [Node] }

On each frame, SpriteKit applies u to the current scene and checks whether a node n was updated. If it was, it applies the update directly onto ref n, or creates a new imperative node. Things can go wrong, though: if the update function duplicates any proxy node, one gets the situation where two nodes n and n' point to the same imperative source (ref n = ref n'), but have different payloads. In this situation the Scene has become inconsistent and the behaviour of SpriteKit is unpredictable.
brokenix,
@brokenix@emacs.ch avatar
  • Linearity via ST monads: a phantom type parameter s (a region) is instantiated once, at the start of the computation, by a runST function of type

runST :: ∀a. (∀s. ST s a) → a

Resources allocated during the computation, e.g. mutable cell references, cannot escape the dynamic scope of the call to runST because they are themselves tagged with the same phantom type parameter.
  • Region types. With region types such as ST we cannot express typestates, but this is sufficient to offer a safe API for freezing an array or for ensuring that files are eventually closed. This simplicity (one only needs rank-2 polymorphism) comes at a cost: we've already mentioned in Sec. 2.2 that it forces operations to be more sequentialised than need be, and, more importantly, it does not support prima facie the interaction of nested regions. Kiselyov and Shan [2008] show that it is possible to promote resources in parent regions to resources in a subregion, but this is an explicit and monadic operation, forcing an unnatural imperative style of programming where the order of evaluation is explicit. The HaskellR project [Boespflug et al. 2014] uses monadic regions in the style of Kiselyov and Shan to safely synchronise values shared between two different garbage collectors for two different languages. Boespflug et al. report that custom monads make writing code at an interactive prompt difficult, compromise code reuse, force otherwise pure functions to be written monadically and rule out useful syntactic facilities like view patterns. In contrast, with linear types, values in two regions can safely be mixed: elements can be moved from one data structure (or heap) to another, linearly, with responsibility for deallocation transferred along.
  • With an indexed-monad approach, everything which must be single-threaded (and which we would otherwise track with linearity) becomes part of the state of the monad. For instance, coming back to the sockets of Sec. 5.2, the type of bind would be as follows:

bind :: (sock :: Var) → SocketAddress → ST IO () [sock ::: Socket Unbound ↦ Socket Bound]

where sock is a reference into the monad's state, Socket Unbound is the type of sock before bind, and Socket Bound the type of sock after bind. Idris uses its dependent types to associate a state to the value of its first argument. Dependent types are put to even greater use for error management, where the state of the socket depends on whether bind succeeded or not:

-- In Idris, bind uses a type-level function (or) to handle errors
bind :: (sock :: Var) → SocketAddress →
        ST IO (Either () ()) [sock ::: Socket Unbound ↦ (Socket Bound `or` Socket Unbound)]

-- In Linear Haskell, by contrast, the typestate is part of the return type
bind :: Socket Unbound ⊸ SocketAddress → Either (Socket Bound) (Socket Unbound)

The support for dependent types in #ghc is not as comprehensive as in #Idris, but it is conceivable to implement such an indexed monad transformer in #Haskell.
  • In summary, an existing lazy language such as Haskell can be extended with linear types without compromising the language, in the sense that:
    • existing programs are valid in the extended language without modification,
    • such programs retain the same operational semantics, and in particular
    • the performance of existing programs is not affected,
    • yet existing library functions can be reused to serve the objectives of resource-sensitive programs, with simple changes to their types and no code duplication.

https://arxiv.org/pdf/1710.09756.pdf