
# Second Statebox Summit

We are delighted to announce the Second Statebox Summit, which will take place in **September (8th-15th)** in the Netherlands.

The Summit will have the following structure:

- **A category theory camp (September 8th-9th)**, to bring people unfamiliar with category theory up to speed for the subsequent workshop. The camp is invitation-only. Details will follow soon.
- **A workshop (September 10th-14th)**, heavily research-focused and invitation-only. The research workshop will be hosted in Schoorl, the Netherlands.
- **A Meetup (September 15th)** in Amsterdam, with speakers from academia, the hacker culture and the business world. The Meetup will be open to everyone and will be a great opportunity to network with interesting people from various backgrounds. The Meetup timeline is still in development and will be made public in the near future.

If you are interested in speaking at the Meetup in Amsterdam, shoot us an email.

## Category theory camp (September 8th-9th)

#### Camp Participants

- Jack Ek
- Denis Erfurt
- Fabrizio Romano Genovese
- Alexander Gryzlov
- Anton Livaja
- Jonathan Paprocki
- Erik Post
- Jason Teutsch
- Daniël van Dijk
- Arian van Putten


**List of lectures and live stream [forthcoming]**

The Category theory camp is directed at workshop attendees who are not familiar with category theory, and has the purpose of helping them achieve a decent understanding of the topic. The subsequent workshop will make pervasive use of category theory, and it is fundamental that people are able to understand each other in order to work proficiently. The camp is directed in particular at people with experience in functional programming and type theory, and lectures will be designed accordingly.

## Research workshop (September 10th-14th)

#### Research Workshop Participants


**List of talks and live stream [forthcoming]**

The workshop will address the following research topics, organized in macro-groups:

#### Compositional Models for Petri Nets, Sept. 10 to Sept. 14

Statebox, as a programming language, is particularly concerned with separating code topology from code meaning. We believe that software design should happen in two stages: first we define the overall geometry of the software, and only then do we populate this geometry with code snippets that perform particular actions. Note how this reflects the way every programmer works: first one sketches on paper what the code “should do”, and then this sketch is used as a guideline to produce the code itself. We are of the opinion that this natural routine should be made formal.

To do this, we decided to use Petri nets as the basic tool to describe software topology, for the following reasons:


- Petri nets are inherently visual, making process design and debugging much easier.
- Petri nets admit a categorical formalization (as, for instance, in [1]), with which we can map net transitions into a semantics via functors. This realizes the unification between software topology and software meaning that we wanted to achieve.

- Compositionality for Petri nets has been realized categorically in different ways, mainly in terms of transition synchronization [2] and place-merging using the cospan formalism [3].
- Recursion can be realized in terms of nets within nets [4], with the evident disadvantage of losing decidability of important properties such as liveness, boundedness and reachability.

At the workshop, the goals for this group will be:

- Categorification of [5];
- Assessing the real potential of [5] in terms of expressivity in protocol design;
- Assessing the challenges in implementing petri-net based protocol design tools, and finding solutions.
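The two-stage picture above can be made concrete with a toy token game (a hedged sketch in Python, not Statebox's actual machinery; the net, its places and its transitions are invented for illustration):

```python
from collections import Counter

# A Petri net's topology as pure data: each transition consumes tokens
# from its input places and produces tokens on its output places.
TRANSITIONS = {
    "pay":  {"in": Counter({"wallet": 1}), "out": Counter({"escrow": 1})},
    "ship": {"in": Counter({"escrow": 1}), "out": Counter({"merchant": 1})},
}

def enabled(marking, t):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in TRANSITIONS[t]["in"].items())

def fire(marking, t):
    """Fire an enabled transition, returning the new marking."""
    assert enabled(marking, t), f"{t} is not enabled"
    new = Counter(marking)
    new.subtract(TRANSITIONS[t]["in"])
    new.update(TRANSITIONS[t]["out"])
    return +new  # unary plus drops places with zero tokens

m0 = Counter({"wallet": 1})
m1 = fire(m0, "pay")
m2 = fire(m1, "ship")
print(dict(m2))  # → {'merchant': 1}
```

Here the dictionary is the "topology" and `fire` is one possible semantics on top of it; in the categorical picture sketched above, a functor would replace `fire` with richer interpretations of each transition.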

**Useful reads:**

[1] Sassone, V. (1995). On the Category of Petri Net Computations. In P. D. Mosses, M. Nielsen, & M. I. Schwartzbach (Eds.), TAPSOFT ’95: Theory and Practice of Software Development (Vol. 915, pp. 334–348). Berlin, Heidelberg: Springer Berlin Heidelberg.

[2] Bruni, R., Melgratti, H., Montanari, U., & Sobocinski, P. (2013). Connector Algebras for C/E and P/T Nets’ Interactions. Logical Methods in Computer Science, 9(3).

[3] Fong, B. (2015). Decorated Cospans. Theory and Applications of Categories, 30(33).

[4] Köhler-Bußmeier, M. (2014). A Survey of Decidability Results for Elementary Object Systems. Fundamenta Informaticae, (1), 99-123. https://doi.org/10.3233/FI-2014-983

[5] Baldan, P., Bonchi, F., & Gadducci, F. (2009). Encoding Asynchronous Interactions Using Open Petri Nets. In M. Bravetti & G. Zavattaro (Eds.), CONCUR 2009 - Concurrency Theory (Vol. 5710, pp. 99–114). Berlin, Heidelberg: Springer Berlin Heidelberg.

[6] Amadio, R. M., Castellani, I., & Sangiorgi, D. (1996). On Bisimulations for the Asynchronous π-Calculus. In U. Montanari & V. Sassone (Eds.), CONCUR ’96: Concurrency Theory (Vol. 1119, pp. 147–162). Berlin, Heidelberg: Springer Berlin Heidelberg.

[7] Hoare, C. A. R. (1985). Communicating Sequential Processes. Englewood Cliffs, US: Prentice-Hall.


#### Consensus and Sheaves, Sept. 10 to Sept. 14

The study of consensus protocols for decentralized systems has become a very active topic of research in the last decade. The original PoW-based consensus protocol by Nakamoto [1] has evolved into different proposals that aim to overcome its downsides (such as energy consumption and lack of scalability). This resulted in PoS-based consensus, such as [2], and DAG-based consensus, as in [3] and [4]. These two approaches are quite different:


- PoS-based consensus represents data in the form of a linearly ordered chain of blocks, where adding inconsistent information causes the chain to fork. Consensus here means having a deterministic and reliable procedure to always establish which chain is the “true” one, i.e. a procedure to decide which branch is the one to be used.
- In DAG-based consensus, instead, information is organized into a directed acyclic graph of blocks. Blocks that represent contradictory information are still allowed to live on it (inconsistent information doesn’t cause forking), and consensus is reached by means of a “voting procedure” where the value of each block is calculated in terms of graph-theoretic properties. If the same information (e.g. “how much money do I own?”) is reported inconsistently by different blocks, the voting procedure allows one to decide deterministically which block is the one to use.
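The difference between the two styles can be caricatured in a few lines of Python (a hedged sketch; real protocols weigh proof-of-work difficulty, stake, or far more refined graph metrics, and every name here is invented):

```python
# Chain-based view: inconsistency forks the chain, and consensus is a
# deterministic rule picking the "true" branch (here: simply the longest,
# with ties broken lexicographically).
def fork_choice(chains):
    return max(chains, key=lambda c: (len(c), c))

# DAG-based view: conflicting blocks coexist in one graph (mapping each
# block to its list of parents), and a graph-theoretic "vote" (here: the
# number of descendant blocks) decides which conflicting block to use.
def descendants(dag, block):
    seen, stack = set(), [b for b, ps in dag.items() if block in ps]
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(c for c, ps in dag.items() if b in ps)
    return seen

def resolve(dag, a, b):
    """Between conflicting blocks a and b, keep the better-connected one."""
    return a if len(descendants(dag, a)) >= len(descendants(dag, b)) else b

dag = {"genesis": [], "a": ["genesis"], "b": ["genesis"],
       "c": ["a"], "d": ["a", "b"]}
print(resolve(dag, "a", "b"))  # → a
```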

On the other hand, sheaf theory [5] has been successfully applied to data representation and analysis [6] thanks to the inherent capacity that sheaves have in representing conflicting information. At Statebox, we are convinced that the underlying theory of consensus protocols can be neatly categorified using sheaves, and we want to bootstrap this research endeavor at the workshop. In particular, in the DAG-based approach it is clear that the properties of the consensus protocol allow for information to be locally consistent but globally inconsistent, a perspective that has striking similarities with the one adopted in sheaf-theoretic contextuality [7].

Our gut feeling (Fabrizio talking here, and I may be completely wrong) is that a generic consensus protocol could be characterized sheaf-theoretically as follows:

- The information, which may contain inconsistent statements, is organized as a presheaf;
- A consensus protocol is a way to lift this presheaf into a sheaf, where some properties have to hold.
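A minimal way to see what such a lifting would have to handle (purely illustrative Python; a real treatment works over a topology of contexts, not flat dictionaries):

```python
# Each participant holds a "local section": a partial assignment of
# values to keys. Sections glue to a global section only when they agree
# on overlaps; a consensus protocol would supply the repair rule for the
# case where gluing fails.
def glue(sections):
    glued = {}
    for section in sections:
        for key, value in section.items():
            if key in glued and glued[key] != value:
                return None  # locally consistent, globally inconsistent
            glued[key] = value
    return glued

print(glue([{"alice": 5}, {"alice": 5, "bob": 3}]))  # → {'alice': 5, 'bob': 3}
print(glue([{"alice": 5}, {"alice": 7}]))            # → None
```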

At the moment, this is nothing more than an idea, and we will need to put in some solid brain work to turn it into something more. At the workshop, the goals for this group will be:

- Review [1], [2], [3] and [4] from a sheaf-theoretic perspective;
- Propose a generalized consensus theory based on sheaves;
- Assess which desirable properties for consensus algorithms can be captured sheaf-theoretically, and whether this is enough to make further endeavors worthwhile;
- Assess the feasibility of the sheaf-theoretic method implementation-wise, that is: Is there an efficient way to turn a sheaf-based model into code? What are the difficulties we have to overcome to reach this goal?

**Useful reads:**

[1] Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System, 1-9.

[2] Buterin, V., & Griffith, V. (2017). Casper the Friendly Finality Gadget. ArXiv:1710.09437 [Cs].

[3] Sompolinsky, Y., Lewenberg, Y., & Zohar, A. SPECTRE. Serialization of Proof-of-work Events: Confirming Transactions via Recursive Elections (pp. 1-66).

[4] Sompolinsky, Y., & Zohar, A. PHANTOM: A Scalable BlockDAG protocol (pp. 1-26).

[5] Mac Lane, S., & Moerdijk, I. (1992). Sheaves in Geometry and Logic, a First Introduction to Topos Theory. (S. Axler, F. W. Gehring, & K. A. Ribet, Eds.) (Vol. 13). New York: Springer.

[6] Robinson, M. (2014). Topological Signal Processing. Berlin, Heidelberg: Springer Berlin Heidelberg.

[7] Abramsky, S., & Brandenburger, A. (2011). The Sheaf-Theoretic Structure Of Non-Locality and Contextuality. New Journal of Physics, 13(11), 113036.


#### Logic Programming and Hardware, Sept. 10 to Sept. 14

Besides developing a language and runtime software implementation, Statebox is interested in developing a chip capable of executing Statebox protocols. We have particular requirements regarding correctness and security: We want to precisely control and observe what the chip is doing to avoid bugs and security issues.


We will consider the problem both mathematically and empirically: We will bring various devices that can be useful for this, such as a software defined radio device to generate and sample signals, a scope/signal generator, amplifier, multiplexers and target chips for which we have layer masks, such as 6502 and 555. We will moreover bring an FPGA that can be programmed using an entirely free-software toolchain called 'Icestorm' [1].

Some of the paths we will explore are:

**Compositional Synthesis**

Programmable logic chips are devices that allow a digital circuit design to be uploaded onto them. They achieve this by connecting many small digital circuits, called *cells*, with a reconfigurable switching bus. A hardware description language such as VHDL [2] is used to design circuits for such a chip. It is a dataflow language, so its programs look like PROPs over stateful digital circuits, with binary signals on the wires.

Statebox exhibits a diagrammatic structure similar to what is used in [3]. This suggests two extensions to the Statebox system:

- Replacing Petri nets in a Statebox diagram with digital circuits and constructing a VHDL program which synthesizes into the circuit;
- Translating Petri nets into digital circuits and using the point above to synthesize a chip design.

**Information-Gain Computing Unit**

Besides executing Petri nets, we want to do computation (folds). These can be expressed as (compositional) logical queries and executed by an information-gain computation. Anthony Di Franco's approach to logic program evaluation [4] suggests an alternative way to construct a general processing unit: replacing the cells of an FPGA with something that can perform information-gain-specific tasks.

This will be made possible by projects aiming to define a free FPGA [5]. If such a device is realised, it could lead to a new architecture of 'declaratively programmable' chips, acting much more like pure mathematical functions than operationally, like a Turing machine.

During this research workshop we want to start placing this thinking on solid categorical grounds.

**Hardware Attestation**

How can we attest (cheaply) that our chip does precisely what it should and is in no way tampered with or otherwise broken? And does the chip leak information when it is operated outside of its electrical specifications? [6]

There is a lot of centralization and trust involved in semiconductors: Chip designs and toolchains are usually proprietary closed systems, with severe security implications [7]. Once built, there is no way of fully verifying the behaviour of a chip without destructively inspecting it, with no guarantees that possible hidden features and bugs will be uncovered in doing so.

We want to design an apparatus that can confirm some fingerprint of the chip by probing it outside of its specified range and measuring the effects of this on various pins. Hopefully we might be able to construct a signature capable of distinguishing topological properties of the die layers/masks.

We can take inspiration from:

- Impulse response techniques, as in listening to the reverberation of a gunshot fired in a space [8];
- The single-pixel camera approach developed in [9], considering a single high-resolution oscilloscope signal as the pixel and different activation patterns as the light sources;
- Tomographic reconstruction, considering the chip as some sort of non-linear space and taking an X-ray through the pins [10].
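On the software side, such a fingerprint check might look as follows (a very rough sketch under invented assumptions; the per-probe response vectors would come from the actual probing hardware):

```python
import math

def signature(responses):
    """Flatten per-probe response vectors into one unit-norm signature."""
    flat = [x for r in responses for x in r]
    norm = math.sqrt(sum(x * x for x in flat)) or 1.0
    return [x / norm for x in flat]

def matches(reference, measured, threshold=0.99):
    """Cosine similarity between two signatures, against a pass threshold."""
    sim = sum(a * b for a, b in zip(signature(reference), signature(measured)))
    return sim >= threshold

# Hypothetical reference measurements taken from a known-good die.
golden = [[1.0, 0.2], [0.5, 0.9]]
print(matches(golden, golden))                     # → True
print(matches(golden, [[0.0, 1.0], [1.0, 0.0]]))   # → False
```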

**Useful reads:**

[1] Icestorm Website.

[2] Wikipedia: VHSIC Hardware Description Language.

[3] Ghica, D.R., & Jung, A. (2016). Categorical semantics of digital circuits. In R. Piskac, & M. Talupur (Eds.), Proceedings of the 16th Conference on Formal Methods in Computer-Aided Design. Springer Berlin.

[4] Di Franco, A. (2017). Information-Gain Computation. ArXiv:1707.01550 [Cs].

[5] Libresilicon Website.

[6] Bernstein, D.J. (2005). Cache-timing attacks on AES.

[7] Vaughan-Nichols, S.J. (2017). MINIX: Intel's hidden in-chip operating system.

[8] Wikipedia: Impulse Response.

[9] Candès, E. J., Romberg, J. K., Tao, T. (2006). Stable signal recovery from incomplete and inaccurate measurements. ArXiv:0503066 [Math].

[10] Wikipedia: Tomographic Reconstruction.


#### Open Games and Cryptoeconomics, Sept. 12 to Sept. 14

Open games [1] are a new, compositional framework for game theory. In particular, open games make it possible to infer the existence of Nash equilibria in a game compositionally, that is, by considering the game as a composition of sub-games.

Intuitively, a Nash equilibrium [2] in a game denotes a strategy from which no player has an incentive to deviate, because deviating would translate to a loss of profit (what “profit” means depends on the game). A well known example of what Nash equilibria are, and of how they can sometimes be counterintuitive, is given by the famous prisoner’s dilemma.
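For readers who want the dilemma spelled out, its equilibria can be checked by brute force (a standard textbook computation, nothing to do with the open-games machinery itself; payoffs are negated years in prison, so higher is better):

```python
from itertools import product

# Prisoner's dilemma: both stay silent -> 1 year each; one defects ->
# the defector walks free, the other serves 3; both defect -> 2 each.
PAYOFF = {
    ("silent", "silent"): (-1, -1),
    ("silent", "defect"): (-3,  0),
    ("defect", "silent"): ( 0, -3),
    ("defect", "defect"): (-2, -2),
}

def nash_equilibria(moves):
    """Profiles where neither player gains by deviating unilaterally."""
    eq = []
    for a, b in product(moves, repeat=2):
        u1, u2 = PAYOFF[(a, b)]
        if all(PAYOFF[(a2, b)][0] <= u1 for a2 in moves) and \
           all(PAYOFF[(a, b2)][1] <= u2 for b2 in moves):
            eq.append((a, b))
    return eq

print(nash_equilibria(["silent", "defect"]))  # → [('defect', 'defect')]
```

The counterintuitive part is visible in the output: mutual defection is the only equilibrium, even though mutual silence would leave both players better off.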

For a long time it was thought that games weren’t compositional, since an optimal strategy for a game may cease to be optimal if the game is changed or extended in any way (for instance by adding more players). Open games reverse this point of view, showing how equilibria can indeed be inferred from the sub-games a game is composed of.

This field of research is very young and, although it is developing incredibly quickly, many problems remain to be solved. For instance, the mathematical machinery developed so far is still not ready to tackle repeated games in a satisfactory way.

The Statebox team thinks that open games will become incredibly valuable for (crypto)-economic analysis in the future, since their inherent compositionality allows them to scale whereas classical game theory becomes unusable for games beyond a certain level of complexity, which is too low to model any meaningful real scenario. For the same reasons, games could be useful for protocol analysis.

The relationship between Statebox and open games is still not clear, but it is for sure worth investigating. We will mainly focus on the following links:


- Open games and cryptoeconomics. Is it possible to describe classical themes in cryptoeconomics using open games? To be more precise, if some cryptoeconomic behavior (such as exchanging currency) can be described in terms of Petri nets, are we able to put open games “on top of them” to obtain a high-level optimization engine? In this respect, open games could provide the best firing strategies on a net to minimize/maximize some utility function representing something we care about (e.g. transaction fees).
- Open games and protocols. Can open games be used to analyze the correctness of consensus protocols? This is heavily linked with the activities of the Consensus group, and it should not be surprising that sheaves of games are already a thing (ask Jules Hedges about this!). Many consensus protocols – such as [3] – are indeed analyzed game-theoretically in contemporary research, but we find the game-theoretic tools employed pretty basic and unsatisfying. Open games could, again, give wider insight into consensus, allowing us to state general properties describing such protocols.
- Complexity and implementability: At the moment, open games are really inefficient when it comes to implementations [4]. This is clearly a huge downside, since if compositionality allows for scalability theoretically, this is still not true in practice. We would like to leverage the insights of all the professionals in programming and complexity invited to the workshop to better understand how such a problem could be solved.

**Useful reads:**

[1] Ghani, N., Hedges, J., Winschel, V., & Zahn, P. (2016). Compositional Game Theory. ArXiv:1603.04641 [Cs].

[2] Fudenberg, D., & Tirole, J. (1991). Game Theory. Boston, MA: MIT Press.

[3] Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System, 1-9.

[4] Papadimitriou, C. H. (1994). On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence. Journal of Computer and System Sciences, 48(3), 498–532.


#### Typedefs, Sept. 12 to Sept. 14

Translations between typed terms and bitstrings are a big field of research in computer science, with huge practical applications. Current standards for this include JSON, XML, Google’s protocol buffers, Facebook’s Thrift, etc. In general, the need to translate typed terms back and forth to bitstrings – or to serialize them, as it is usually said – comes from the need to communicate such terms over some sort of message channel.

All the standards mentioned above, though, are not categorically framed, and this results in the paradoxical situation in which a serialization method for typed terms may not be reliable with respect to the terms’ underlying type theory.

Clearly, this lack of theoretical ground becomes problematic in applications that rely heavily on type theory, such as functional programming and the language Statebox is developing.

We are set on fixing this: Typedefs is a categorically grounded serialization protocol for types. We need it to be as universal as possible, to ensure seamless communication between a broad spectrum of different type theories and programming languages. For this reason, the core of Typedefs is pretty simple, and focuses on types obtained from the constructors 0, 1, sums and products between them, and a controlled form of recursion.

In technical terms, we started describing Typedefs in terms of initial F-algebras [1], which results in the ability to formalize only finite data structures. Our F-algebras are defined for endofunctors on the category of sets formed only by finite products and coproducts of initial and terminal objects. Remarkably, this is enough to express common data structures such as booleans, natural numbers, lists, trees and so on. The actual existence of such initial algebras and their computation rely heavily on Adámek’s [2] and Lambek’s [3] theorems, as any type theorist would legitimately guess. What is left to do is to internalize the naming conventions and the serialization procedure itself to ensure ultimate consistency, and to implement it. A prototypical implementation of Typedefs in Idris [4] is already being worked on.
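To make the core tangible, here is a hypothetical miniature of such a type language and a serializer for its terms (this is not the actual Typedefs specification; recursion is omitted and the encodings are invented for illustration):

```python
# A type is ("zero",) | ("one",) | ("sum", l, r) | ("prod", l, r).
ONE = ("one",)
BOOL = ("sum", ONE, ONE)             # two inhabitants: left () and right ()
PAIR_OF_BOOLS = ("prod", BOOL, BOOL)

def serialize(ty, term):
    """Encode a term of the given type as a bitstring."""
    tag = ty[0]
    if tag == "one":
        return ""                    # the unique inhabitant needs no bits
    if tag == "sum":
        side, inner = term           # ("L", t) or ("R", t)
        bit, branch = ("0", ty[1]) if side == "L" else ("1", ty[2])
        return bit + serialize(branch, inner)
    if tag == "prod":
        return serialize(ty[1], term[0]) + serialize(ty[2], term[1])
    raise ValueError("terms of type 0 do not exist")

true, false = ("L", ()), ("R", ())
print(serialize(PAIR_OF_BOOLS, (true, false)))  # → 01
```

Even this toy version shows why the type must travel with the bits: the empty string is a perfectly valid encoding (of the unit value), and "01" only means "(true, false)" relative to `PAIR_OF_BOOLS`.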

The goals of this research group will be:


- Work out the categorical formalization of serialization, and study how it relates to hashing algorithms.
- Focus on the current Idris implementation and its efficiency.
- Start implementing Typedefs in other languages.

**Useful Reads:**

[1] Pierce, B. C. (1991). Basic Category Theory for Computer Scientists. Cambridge, Massachusetts: The MIT Press.

[2] Adámek, J. (1974). Free Algebras and Automata Realizations in the Language of Categories. Commentationes Mathematicae Universitatis Carolinae, 15(4), 589–602.

[3] Lambek, J. (1968). A Fixpoint Theorem for Complete Categories. Mathematische Zeitschrift, 103(2), 151–161.

[4] Brady, E. (2013). Idris, a General-Purpose Dependently Typed Programming Language: Design and Implementation. Journal of Functional Programming, 23(5), 552–593.


### Structure

As in the previous workshop held in Zlarin, Croatia, participants will be completely free to organize themselves and cooperate as they wish, and the four topics of research highlighted above should be considered more as guidelines than as constraints. There will also be some invited talks, which will be live-streamed (links will be provided on this page). Please note that the number of talks will be kept to a bare minimum, seeing that the workshop is mainly research-oriented.

### Location

The village of Schoorl lies at the foot of the Schoorl dunes: the highest, widest and most diverse dune area of the Dutch coast. Throughout the year, it is a lovely walking and cycling area. Schoorl has 3 climbing dunes. Cycling through the dunes to Schoorl aan Zee (where no cars are allowed) is an experience in itself.

### Food

Breakfast and dinner will be provided at the accommodation.

## Meetup (September 15th)

**List of invited speakers [forthcoming]**

**List of talks and live stream [forthcoming]**

The main purpose of the Meetup is to bring together people from academia, business and hacker culture. We feel that these communities, albeit often separated, have much to give to each other. The Meetup will be heavily focused on cryptoeconomics, blockchain tech and how these topics can help us to make the world into a better place (fixing environmental problems, making things more efficient, implementing sharing economies, …).

Again, all the relevant details will follow in the upcoming weeks.

#### Subscribe to our email list to stay up to date!
