00:00 to 1:34:26
Hi everybody, can everyone hear me okay? Let's see what we've got. All right, chat, if you guys can hear me, say yes, because I can't see the chat at the moment, which makes an AMA somewhat problematic. Ah, there we go, okay, we're good. I'm in the mountains of Switzerland right now, over in St. Moritz. I'm here for a conference; I just flew in from Zurich. I went from London to Zurich and took a three-hour train ride to come down and enjoy this nice conference. This is one of the few places in Switzerland where you're bound to hear more Italian than French or German. I just got back from a major summit in Berlin not too long ago, and I wanted to make a video recap of some Shelley-related stuff, some news about that workshop we did internally, and some upcoming releases, and then I wanted to open the floor and take some questions from you guys. Hopefully this won't be too long. I know I'm notorious for long things, but it's 11:31 at night and I've been traveling, so I'm a little tired, but we'll get through it.

Okay, so first on the list is a recap of the workshop we did in Berlin. First, there was a bit of a misunderstanding about what this workshop was. It was not a public event. We apparently received some criticism over Twitter, with people saying, oh, we saw the pictures you took and not many people showed up. Well, 100 percent of the people there were either partners of our company, like the EMURGO people, or work for me, so no more and no fewer than the necessary people showed up. The purpose of the summit was to discuss the formal specifications we've been writing for Shelley, in particular two formal specifications: one is the delegation specification, and the other is the formal ledger specification.

Generally, when you have a cryptocurrency, the rules of how that ledger works are implicit within the reference implementation. For example, in a UTXO system there's this notion that the amount going in on the inputs has to match the amount going out on the outputs, and if there's a mismatch you're either creating new coins or paying mining fees. That's the kind of thing a ledger rule would cover. You can extract ledger rules from an implementation; it's a perfectly legitimate thing to read the Bitcoin client and say, okay, these are the ledger rules of Bitcoin, or these are the ledger rules of Monero. It's the rules of the game, almost like watching some people play poker and, after a few rounds, inferring that you know how poker is played from watching them. The other way to go about it is to write the instruction manual for the game, and because of the nature of what we're doing, that's probably the better route, because nuances can be missed if you're just covering one particular implementation. So anyway, we wrote a formal ledger specification that fully describes how the Cardano ledger works right now, that's the old ledger spec, and we are describing how the ledger needs to work for Shelley, covering all the features Shelley is going to have: multi-asset accounting, so when you issue assets you can track multiple assets at the same time; extensions to the UTXO model, so that we can actually implement Plutus and Marlowe in the system; amongst a litany of other little features, like the notion of preservation of value, where you can always account for the units of ADA in circulation and the reserve ADA that's due to be paid out as rewards, and you can write property-based tests against that. So that was one major document we covered, and it was really important to go through it for completeness, but also for accuracy, and finally for simplicity. When you write a specification, it's a really good idea to simplify it as much as you can, remove unnecessary rules, and get rid of redundancies.
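A preservation-of-value rule like the one described can be sketched in a few lines. This is a hypothetical illustration of the idea, not Cardano's actual rule or naming; values are plain integers standing in for lovelace amounts.

```python
# Hypothetical sketch of a UTXO-style "preservation of value" ledger rule:
# the value consumed by a transaction's inputs must equal the value
# produced by its outputs plus the declared fee. Any mismatch would mean
# coins were created or destroyed, which the rule forbids.

def preserves_value(inputs, outputs, fee):
    """Return True iff consumed value == produced value + fee."""
    return sum(inputs) == sum(outputs) + fee

# A valid transfer: 100 in, 98 out, 2 paid as fee.
assert preserves_value([60, 40], [98], 2)
# Invalid: this transaction would silently mint 5 new coins.
assert not preserves_value([100], [103], 2)
```

A property-based test would then generate random input/output/fee combinations and check that the ledger accepts a transaction exactly when this predicate holds.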
And the only way to do that is with what's called a red team, black team model. The concept is that one team, the black team, writes it, and the other team, the red team, reads it, criticizes it, tries to find problems with it, plays the adversary, and challenges every design assumption. The workshop basically gave us the opportunity to do that. People from IOHK science were there, Peter Gaži was representing them, and we also had engineers from the Haskell team and engineers from the Rust team, as well as engineers from EMURGO, in particular Sebastien and Nico, because they're actively maintaining Yoroi, and they're going to need to understand where the ledger rules are going for Shelley to ensure that Yoroi stays interoperable. Okay, so the other specification we were covering, the one that's probably more meaningful to the vast majority of you, is the delegation specification, which basically describes how staking is going to work under the hood. When we first designed Cardano back in 2016, and as we were building it throughout 2017 with our initial engineering effort, we kind of had this idea that delegation could be done iteratively: we'd have a very simple design that almost looked like certificates, like web certificates, the X.509 standard; you'd just post them on a blockchain, and with some modification everything would kind of work its way through. This showed our naivety, and it caused quite a bit of delay for us historically, because the moment we started getting deeper into that concern, we realized it's probably the richest and most significant component of a proof-of-stake system. It's not good enough just to define what a ledger is, what an epoch is, how slot leader elections work, and where randomness is going to come from, and prove that you have nice security properties. It's much more meaningful to talk about how you move the allocation of
control from the base state to a different state, and do that in a way that is user-friendly, on the types of devices people are actually going to want to use, from cold devices to mobile devices. For other actors in the ecosystem you have to have an opinion too, for example exchanges: when you hand a stakeable currency to an exchange, because they control the private keys, the exchange is implicitly controlling the rights behind that stake, and you have to build special accommodations if you want to exclude them from that. So throughout all of 2018 we were working very diligently at parsing down all the trade-off profiles and consequences of these business requirements, and around the middle of that year we eventually converged on a design we were happy with. The problem is that you have to build it, and to build it you need a spec. So we handed that design document to the specifications team in the formal methods group, and Philipp Kant, Jared Corduan, Lars Brünjes, Duncan Coutts, and others spent a considerable amount of time generating the delegation specification. It took a long time to get the spec to a readable, complete state, and the workshop also acted in that red team, black team model, where for the first time the engineering teams were able to go through it in incredible, excruciating detail, and that honed the specification to the point where we now know all of its nuances well enough to build it. Okay, now, the advantage of this specification process is that the spec, while written in math, is very close to a canonical representation in Haskell, meaning we can go very easily from the math to a kind of lightweight reference version of the software. We've actually done that for most of the specifications; if you look at the formal spec repos we have on the IOHK GitHub you can see it, and we've also done this for delegation. Now, the advantage of this setup is that by having reference code
not only do we have an opinion on how it should be constructed and a very rigorous way of writing down those instructions, which is useful for testing; you now also have a reference point that you can A/B test against the commercial implementation, to verify that the commercial implementation is bisimilar to the specification. This is the highest standard of engineering, and it's very difficult to do. It's still done a lot in aerospace engineering, when you're building satellites or a Mars rover, and for good reason: once deployed, these systems have to operate autonomously or semi-autonomously for long periods of time, and if there's a bug you can't simply go retrieve them and fix them; they just go offline, and you've lost the probe or the satellite, which can cost billions of dollars. So you need assurance that there's a very high probability it's going to work the first time you flip the switch. Furthermore, because these are cryptographic protocols, they require security auditing once they've been implemented, and it's extremely important that the auditor fully understands the intentions behind how the system was to be constructed, in addition to how it actually is constructed. That allows a much richer, deeper audit, and provides a much higher degree of assurance that you've correctly implemented the system according to your intentions, not just that the code itself doesn't have obvious defects like side-channel vulnerabilities. So anyway, that was a big milestone for us. It's the first time as a company that we've done it at this scale; we'd done it in a smaller group internal to the wallet backend team with the UTXO specification, but never on a company-wide scale across multiple teams, including external vendors. We were really excited to bring everybody together, and the conversations were incredibly deep, but what was really reassuring was that only minor changes needed to be made.
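The A/B testing pattern between a reference and a commercial implementation can be sketched like this. Both functions here are illustrative stand-ins, not Cardano code; the point is the pattern of checking two implementations agree on randomly generated inputs.

```python
# Sketch of property-based A/B testing: a slow, spec-like reference
# implementation is compared against an optimized production version on
# many random inputs. Any divergence is a bug in one of the two.
import random

def reference_apply_fee(balance, fee):
    # Direct transcription of a (hypothetical) spec rule.
    return max(balance - fee, 0)

def production_apply_fee(balance, fee):
    # Hand-optimized version under test.
    return balance - fee if balance > fee else 0

def ab_test(trials=1000, seed=42):
    rng = random.Random(seed)
    for _ in range(trials):
        balance, fee = rng.randrange(10**6), rng.randrange(10**4)
        assert reference_apply_fee(balance, fee) == production_apply_fee(balance, fee)
    return True

assert ab_test()
```

Real frameworks like QuickCheck (Haskell) or Hypothesis (Python) automate the generation and shrinking of such test cases; this is the same idea by hand.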
Both the ledger specification and the delegation specification held up, and that's just a great thing: when you walk in with a professional team of critics, a lot of really smart people with PhDs who get pretty aggressive when they're asked to be proper peer reviewers, and they give it their all and don't find too many issues, that's a good sign. So a natural question is, where do we go from here, and what's the bridge to Shelley? We're retiring Byron. In fact, 1.5 will be the end-of-life update for Byron, and it's also a Shelley update; it's the very first Shelley update and the very last Byron update. Excluding the hotfix we're shipping soon, 1.4.1, which corrects a few lingering issues from 1.4, users won't immediately notice any difference with 1.5. There are no material changes being made to the wallet backend and no material changes to any features, so you'll install it and nothing will visibly happen; it'll be just like your normal wallet. But under the hood, we've stripped out the classic Ouroboros implementation and replaced it with Ouroboros BFT. This is the first step on the road to Shelley. The way Shelley is going to work is through a series of iterative updates that will first put core infrastructure in place, and then put a process in place that will gradually decentralize the network, epoch by epoch. The first infrastructure piece is Ouroboros BFT. The next infrastructure piece is going to be 1.6, which will ship the decoupled wallet. If you look at our repos right now, you'll see one called cardano-wallet, and we're almost done with it: we've taken the Cardano wallet we currently have, which was coupled to the backend cardano-sl repo, and ported it out into its own standalone software product that can, over a service,
connect to the core. As an end-user experience nothing should change, you won't actually notice anything, but it's a very powerful extraction. This modularization means we can now connect that wallet backend to different code bases, to different implementations. We can connect it to Jörmungandr, which is the code name for the Rust client (we don't have a product name for it yet, but it's Rust Cardano), and we can also connect it to the new Haskell code. So 1.6 will be about getting us Icarus-style addresses and the decoupled wallet. The next update will take the Rust client we've constructed and the new Haskell code, and, if you look at our repos, cardano-chain and cardano-network, which will replace the deprecated Byron-based cardano-sl, we'll be throwing away cardano-sl and plugging cardano-chain and cardano-network into that wallet backend. What this basically means is that once that's done, 100 percent of the legacy code will have been thrown away, completely replaced with new code that was designed through a specification-driven effort. The other thing is that the ledger rules we're implementing are implemented almost like a little DSL in the ledger spec: it's easy to add and take away rules, and very easy for us to iterate up to any design we want. So once this new code base is installed, it's going to be a fairly straightforward process to put in all the Shelley-related logic. Where it's heaviest is on the address side. Instead of having just a single key for a spending account, you now have the idea of tracking both a spending key and a staking key, and you can segregate them. For example, in the case of cold staking, you'll be able to take a spending key, put it on a Ledger device, use the Ledger device to generate a transaction that creates a staking key, and then you can import that staking key into
Daedalus. Daedalus can then hold the public key of that spending key, so it can track the value behind it, and it can have the right to delegate, but it doesn't have the right to spend. This is what allows you to achieve cold staking: your spending key never leaves the hardware device or the paper wallet, but there's a management infrastructure within Daedalus that allows you to manage the staking rights and move them around. And if you really want to, you can escalate and import those private keys, if the device allows the keys to leave it; that's device-dependent. So anyway, these are fairly complicated addresses, and in the case of exchanges you actually need yet another kind of address. We call them enterprise addresses in the specification. If ADA sits at an enterprise address, it's as if that ADA doesn't exist for staking purposes: it's not in the stake distribution, and it has to leave the enterprise address to re-enter circulation. So if, for example, 30 percent of the ADA is sitting in Bittrex and Binance and other places, and it's in enterprise addresses, it's as if there's 30 percent less ADA in terms of command and control. In other words, exchanges won't participate in staking and won't have control over the network, as long as the ADA is sent to an enterprise address, which has to be cosmetically different, and so forth. Now, there's a very rich discussion we're having right now about what to do with the address format, what to do with metadata, and when to introduce metadata. Careful readers of the Why Cardano paper will recognize that for three years it's been a goal of ours to associate some notion of metadata with an address, and the first entry point for this is the delegation component: when you register a stake pool, there needs to be metadata associated with it. Who runs the pool? There's a notion of a ticker name so you can find it, a description of the stake pool, and maybe you even want to embed a URL so Daedalus can pull some data off a website.
From that, you'll be able to read more about the purpose of a pool, and also get simple system-level parameters, like how much the pool charges. But going above and beyond that, look at arbitrary transactions between Alice and Bob: Alice usually isn't sending money to Bob just because she wants to send money to Bob. Sometimes that's the case, but usually there's some intent embedded in the transaction. I'm buying something, I'm donating money with an understanding of its use, I'm prepaying for a service, like you mowing my lawn. So at some point there's going to be a way to embed intent into the transaction, and this can be done with a variety of tools. The simplest would be to take the terms of the transaction, hash them, sign them, and include them as an attribute of the transaction on the sending side, so that if somebody wanted to show up later in a court of law, or in some other venue out of band, they'd be able to prove, in a timestamped, auditable, immutable way, what the intent of the transaction was: I meant to donate, here's the contract, it matches, and so forth. You can even create an interactive protocol where the terms are signed by both the sender and the receiver; this is the magic of Marlowe and Plutus. A lot of these things involve some complexity around the richness of the semantics of the scripting language, as well as the address structure itself: how much data can we store in there, and what types of attributes can attach to addresses? One of the reasons Shelley took so long to get to this point is that we realized we needed to abstract our design process so that things were very modular and easy to hook into. The price you pay for that is a bit of complexity creep, and by reducing the simplicity of the system you open up more potential attack vectors.
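The simple hash-and-attach approach to transaction intent can be sketched like this. It's a minimal illustration, not Cardano's metadata scheme: only the digest of the human-readable terms travels with the transaction, and signing with the sender's key is deliberately elided.

```python
# Sketch of attaching "intent" metadata to a transaction: hash the
# human-readable terms and carry only the hash on-chain, so a party can
# later reveal the terms out-of-band and prove they match. A real scheme
# would also sign the digest with the sender's key.
import hashlib

def intent_digest(terms: str) -> str:
    return hashlib.sha256(terms.encode("utf-8")).hexdigest()

terms = "Donation of 100 ADA to the community fund, 2019-01-25"
tx = {"amount": 100, "metadata": intent_digest(terms)}

# Later, in court or any other venue: reveal the terms, check the chain.
assert intent_digest(terms) == tx["metadata"]
assert intent_digest("Different terms") != tx["metadata"]
```

Because the chain is timestamped and immutable, a matching digest proves the terms existed, unaltered, when the transaction was made.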
So you have to follow a very slow, methodical, systematic process to get these things through. Anyway, 1.6 is about the decoupling; 1.7 will be about verifying that both the Rust and the new Haskell code work with the system; and the next upgrade after that will put in all the delegation mechanics and the other Shelley-related features, like a massive update to Daedalus that includes the delegation center, the registration process, and all these other things, so that you can actually begin staking. Now, how do we build up stake pools? Well, there's a parallel effort on the Rust side to get a fair representation of the Shelley specification done as quickly as possible, so that we can have a command-line client to begin the build-up of staking pools. I mentioned earlier that we have this concept of gradual decentralization, and there are two sides to that. One, people who want to run stake pools need a fair shot at getting somewhere: learning how the software works, getting their Docker image or whatever container they use set up, figuring out how to deploy it on a host or self-host, and getting that done; there has to be a burn-in time for that. Second, you have to allow time for it to propagate through the Cardano ecosystem: people have to market, they have to brand, they have to go make pitches about why they're great, and so forth. So our hope is to get the Rust client, sometime in February, to a point where we can launch a testnet where that process can start. Basically, you'll have two parallel efforts going on with two separate teams. One is about getting that core infrastructure constructed and verified, through a series of gradual updates, starting with Ouroboros BFT, then the decoupling, then verifying the new backends work. The other effort will be getting that hook set up for
expert users who want to run stake pools to actually start doing that, getting used to it, understanding it, and then upgrading and iterating to the point where it perfectly replicates the user experience we anticipate for Shelley. Then there'll be a merger between the two, our hope is 1.8, and that merger is when staking will go live in the system. Because this is a new protocol, and no one's ever done this before in the history of the cryptocurrency space, it would be pretty crazy to just flip a switch and say, well, congratulations, here you go. We need gradual decentralization, but it can be expedited. The easiest way is to recognize that we already have a pretty convenient unit of time in the system: an epoch, which is currently 21,600 slots. So we can take the Ouroboros BFT federation and say, instead of making 100 percent of the slots, at epoch zero of Shelley it will only make 90 percent of them, for example; the next epoch it will only make 80 percent of the slots; the next, only 70; and it will just keep winnowing its way down until it hits a critical threshold, and then all the slots in the network are made by the stake pools. This creates a nice control valve in the system: in case something we didn't anticipate happens, a major error occurs, some catastrophic failure happens, or a small coordinated malicious minority tries to do something weird, you have a mechanism built in to gradually work your way up. The other thing is that it gives people who need a little more time to get into staking a fair shot. It's probably going to be a distribution where the expert users who are really passionate get in first, and less experienced users come in a little later and say, oh, I want to play too, and
if we tried to get everything in all at once, with all slots made by pools from day one, it's likely we'd see an oligarchy form. So anyway, that's the gradual plan, and we'll write a dedicated blog post as we get a little closer about exactly what those parameters will be, whether it's 5 percent or 10 percent or even more of the slots per epoch. Each epoch is only five days, so even at 5 percent it would only take a few months, and at 10 percent it would be even faster. The other important thing to say is that the BFT side doesn't get any rewards; we make nothing running the network, it's just a temporary measure to ensure security stays high, and all the rewards go to the actual slot producers. So that's the road. We start with Ouroboros BFT, the protocol that will be used throughout the transition to get us to Shelley; we decouple the wallet and carry over a bunch of wallet-specific features; then we move on to verifying that the new Rust backend and Haskell backend are working correctly; and then we put in the final delegation features for both Daedalus and Cardano itself. At that point it'll be pretty straightforward to delegate. As for the GUI mock-ups of the delegation center, we reviewed them at the workshop as well, and we'll have Darko or Nikola or someone on that side write a blog post and show off some of the UX and UI work, so you guys will get a pretty good idea of what the delegation center is going to look like. We're pretty excited about the designs; we think they're forward-thinking, well structured, and extensible, so we can keep adding new things. So anyway, that's the whole Shelley story. It's a gradual process, and we're beginning that process this month. All throughout Q1, things will be turning on, and then eventually you'll see actual staking occurring with stake pools, and more and more of those slots will be produced by stake pools.
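The winnowing schedule just described can be sketched numerically. The 10 percent step, the 5 percent alternative, and the five-day epoch are the figures from the discussion above; the real parameters were explicitly still to be finalized, so treat this as arithmetic, not a commitment.

```python
# Sketch of the gradual-decentralization schedule: the BFT federation's
# share of slots drops by a fixed step each epoch until stake pools make
# every slot. Integer percentages avoid floating-point noise.

EPOCH_DAYS = 5

def federation_share_pct(epoch: int, step_pct: int = 10) -> int:
    """Percentage of slots still made by the BFT federation at a given epoch."""
    return max(0, 100 - step_pct * epoch)

def days_until_fully_decentralized(step_pct: int = 10) -> int:
    epoch = 0
    while federation_share_pct(epoch, step_pct) > 0:
        epoch += 1
    return epoch * EPOCH_DAYS

assert federation_share_pct(0) == 100   # epoch zero: fully federated
assert federation_share_pct(3) == 70    # stake pools already make 30% of slots
assert days_until_fully_decentralized(10) == 50    # 10 epochs ~ 50 days
assert days_until_fully_decentralized(5) == 100    # 5% steps ~ 100 days
```

This also shows why the mechanism works as a control valve: freezing or reversing the schedule is just a parameter change, not a protocol change.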
Suddenly, 100 percent of the network is run by stake pools, and congratulations, we're completely decentralized, and decentralized in a responsible, sustainable, systematic way; we're also decentralized in a way that allowed everybody to participate, because they had time to think about how to run this stuff, how to test it, and how to get these things going. That's the best I think anybody can do, given the nature of what we're trying to accomplish. It's important to understand that this is not a fork of existing software; this is completely new software. These are not modified protocols; these are completely new protocols. There have been over 40 papers written for Cardano, more than 20 of which have gone through peer review, so about half, with more going through peer review at major conferences, and this is original science. So we have the dual disadvantage of being the first mover with new ideas, which always means some suboptimal results, and of not having a reference code base to look at. Had we forked Bitcoin or Ethereum, we would have had all the lessons they've learned over the ten and four years, respectively, to leverage; in this case, we're doing it basically all on our own. And despite that, in about a year's time we were able to get a product to market, get over a hundred thousand users, build multiple implementations of that product, get it listed on more than 25 exchanges, build a huge list of ideas about where to go, avoid a major security event (knock on wood), develop an auditing process, a QA process, and a helpdesk process that has answered, I think, more than 50,000 tickets, and move completely to specification-driven development for the core components of the system. Those specs will be publicly available soon; as with all things, they have to go through a bit of a release process, just to make sure they're cleaned up, the language is polished, and
we didn't leave in any comments or other things that would confuse people, things that were just there for the workshop, for example. And then you guys can be our red team, in addition to the one we had at the workshop, and verify that those things are okay. Now, there's also another side of things, which is feature richness: where are we going to go? Shelley's exciting, but by no means is Shelley the end of the road; Shelley's just another major milestone on the way to where we need to go. Jared, the principal architect of these specifications, will spend this month and the next updating the ledger spec and the UTXO spec to reflect the fact that we're deviating from normal UTXO to what's called extended UTXO. This is a requirement for Plutus and Marlowe to work in our system. The Plutus team has built something called a mockchain, a kind of fake blockchain they constructed to test and verify that their smart contract language works, but at some point we have to run Plutus on a real system, and the Plutus team is coming up on a 1.0 milestone. PlutusFest basically showed Plutus off to the world and to the general PL community, and we've gotten great feedback from it. People using the Plutus Playground are actually able to build things now, and we recently even saw a course launched by members of the community on how to use Plutus. That was a huge milestone and a great success for us. But there needs to be a methodical movement of Plutus into our system, and that's going to be done by updating the specification to reflect an extended UTXO model. The advantage of the approach we've taken is that this extension turns out to be rather mild compared to delegation, so it can be added into the ledger fairly quickly post-Shelley; it's not something of that scale or complexity. Most of the scale and complexity was in the language design, and it's really a four-layer model: you have Marlowe,
Plutus, Haskell, and then the infrastructure that Haskell code needs to run on. We've been scaling up resources on both WebAssembly and GHCJS for JavaScript, to be able to take Haskell code and convert it into JavaScript, or convert it into WebAssembly, meaning you'll be able to run the client side of your smart contracts in a containerized environment, whether that's within Node, or within Chrome or Firefox for web applications, or as Node packages. The layered model basically says: if you're dealing with domain experts who are not expert programmers, almost like the Excel model, you know, accountants and people who like spreadsheets but aren't really hardcore programmers, give them a logic that lets them solve their set of problems; that's the Marlowe layer. If they're programmers who want to live within a certain environment, that's what Plutus is all about. And when you go outside of that environment to a more general one, either directly on the metal of the system or within some sort of VM like V8 or WebAssembly, then give them the Haskell side, with GHCJS and WebAssembly, so we can adequately cover all parts of that triangle: the server, the client, and the blockchain. This is the Plutus paradigm. Getting it into Cardano is not very hard, because that environment has been custom-built for it; the hard part was deciding what it should look like, and we spent two years of research with the best people in the field, like Phil Wadler and Simon Thompson and others, figuring that out. So we're super excited about that. The other thing is multi-asset. You may have noticed we have this design called chimeric ledgers. We've actually implemented it in Scala already; it's in the enterprise framework we've been building and running pilots with, and we're porting that design over to Cardano. The first instantiation of it will be for stake pool
rewards. Unlike Bitcoin, where when you join a mining pool you have to trust the pool operator to pay you, with Cardano we can guarantee that you're automatically paid without trusting the pool operator. To make that work well without a lot of dust transactions, it's actually useful to use an Ethereum-style account model in this particular case, rather than a UTXO model, and because of the chimeric ledgers work you can easily move between these two accounting models and guarantee that money is preserved; no money is lost. So that's our first instantiation, and an extension of that will be user-issued assets, native assets. Over the next two months, that's what Jared is going to be working on with his team, extending our specs so that we have that, and then we can roll those into the Cardano chain in anticipation of the launch of Goguen. On the wallet backend team, after we have the Icarus-style addresses, the next things in the pipeline are hardware wallet support and multisig, alongside all of the API enhancements that need to be done for Shelley. The v1 API set is sufficient for exchanges, for basically the way we've built Cardano today, but there need to be specific APIs relevant to staking mechanics, so that API extension will likely be the last REST API update we do. Maybe there'll be a few more for certain things, but we're not going to stay on that model for long. Once that's done, some subset of the wallet backend team will begin redoing the backend to include GraphQL. This is part of a broader model we have for our application architecture for Daedalus, and a more general way of accessing APIs, so that exchanges can make much more sophisticated, granular queries, and when we bring the terminal into Cardano, you'll be able to make much more sophisticated queries while interfacing with those wallet backends. We're also going to be moving GraphQL out more broadly.
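The chimeric-ledger move between the two accounting styles, mentioned above for stake pool rewards, can be sketched as a value-preserving transfer. The types and names here are illustrative only; the actual design is in the chimeric ledgers paper.

```python
# Sketch of the chimeric-ledger idea: move value between a UTXO-style
# ledger (outputs keyed by "txid#index") and an account-style ledger
# (balances keyed by account id) while preserving total supply.

def total(utxos, accounts):
    return sum(utxos.values()) + sum(accounts.values())

def utxo_to_account(utxos, accounts, utxo_id, account_id):
    """Consume a UTXO and credit its full value to an account."""
    value = utxos.pop(utxo_id)
    accounts[account_id] = accounts.get(account_id, 0) + value

utxos = {"tx1#0": 400, "tx2#1": 100}
accounts = {"pool_rewards": 50}
before = total(utxos, accounts)

utxo_to_account(utxos, accounts, "tx1#0", "pool_rewards")

assert accounts["pool_rewards"] == 450
assert total(utxos, accounts) == before   # value preserved across models
```

The account side is what makes frequent small reward payouts cheap: repeated credits accumulate in one balance instead of spraying dust outputs across the UTXO set.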
Across all of the IOHK product lines, we're moving Mantis in that direction, eventually we'll move the Rust client in that direction, and we'll do it with the Haskell client as well. GraphQL is really powerful; it can do a lot, and it's a very resilient, robust system that Facebook designed and open-sourced. The only downside is that, because it has much more power and complexity, you have to be a little careful about how you integrate it; in Haskell in particular there's going to be some friction getting it in. So one part of the team will be involved in that, and in a parallel effort another part will stick with REST and update the v1 APIs to an extended set, so that we can cover all the staking-related features and the things necessary for Plutus. Beyond that, the team will also be working on Ledger support and multi-account settings, so that you can manage public keys, your paper wallets, your Ledger devices, the delegation center, these kinds of mechanics, as well as our plans for multisig inside the system: both naive multisig accounts and multisig accounts that can be used for stake rights. It gets a little hairy when you go from one user controlling stake rights to multiple users controlling stake rights: how granular do you want to set those permissions, is it just a majority system or a supermajority, and what exactly does that look like? So that's what that team is going to be doing, I'd say, for the first six months of this year: the decoupling, Icarus-style addresses, multisig, terminal support, Ledger. That's a huge task list for the engineers there, but they're really motivated and working really fast, and the decoupling allows them to work even faster. Then for the next six months we'll be enriching the terminal support in preparation for Goguen, because you'll be able to deploy smart contracts from the command line and the wallet, and we need to make sure there's a good experience there.
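The stake-rights permission question raised above, majority versus supermajority, is just a choice of threshold in an m-of-n scheme. A minimal sketch, with signature verification elided and names purely illustrative:

```python
# Sketch of an m-of-n threshold policy for multisig stake rights: an
# action is authorized only if at least `threshold` distinct registered
# co-owners have signed. Cryptographic signature checks are elided.

def authorized(signers, owners, threshold):
    valid = len(set(signers) & set(owners))   # only known keys count, once each
    return valid >= threshold

owners = ["alice", "bob", "carol"]

# Simple majority (2 of 3):
assert authorized(["alice", "bob"], owners, 2)
# Supermajority (3 of 3) is stricter:
assert not authorized(["alice", "bob"], owners, 3)
# Unknown keys don't count toward the threshold:
assert not authorized(["alice", "mallory"], owners, 2)
```

The "how granular" question is whether one threshold covers all actions, or whether, say, re-delegating and withdrawing rewards each get their own m.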
experience there. We learned a lot ourselves, and we learned a lot from Mallet, the minimal wallet interface that we created for the KEVM and IELE testnets, and we just need to keep pushing that forward. In addition to that, that team will also be working on our sidechain support, because this will, coincidentally, give us a phenomenal light client experience, I'd argue probably the best light client experience in the entire cryptocurrency space, and for good reason. So we have these things called TMSs, threshold multi-signatures. It's a primitive that we created in our sidechains proof-of-stake paper, which we got accepted to Oakland, a major security conference that the IEEE puts on. It's actually one of the hardest conferences to get into; the original Ouroboros paper got rejected there and we went to Crypto instead, so it's really exciting that the sidechains paper is that much better. The sidechains paper gives us the ability to have a discussion around the creation of these special representations that allow you to verify big chunks of history without having the history. This is really useful for sidechains, because you need some sort of way of knowing that when you receive a foreign asset, that asset has not been double spent and that the asset actually exists. But by the same token, you can look at a light client as treating its own asset as a foreign asset; in other words, it can verify that when it receives a transaction, it's actually part of the UTxO set even though it doesn't have the full blockchain. So what we're likely going to be able to do is take these TMSs, use the stake pools to generate them, and embed them in regular places within the chain, in the epoch, and then take that and the entire UTxO set of the system, take the Merkle root of the UTxO set, and put that in a block header. So basically what you'll be able to do to bootstrap is download the UTxO set, check it against the hash in the header, and then verify that that's actually the correct UTxO set
through the TMS, and just have the genesis block, not the full chain. That's a super fast way of bootstrapping; it can be done in a matter of minutes if not sooner, just however long it takes to download these assets, which are considerably smaller than the entire chain. Then, as a background process, the client can upgrade its way up to a full node at some point, but you'll be able to immediately begin spending as soon as you've verified that, because you know how much money you have and you have a high trust threshold for that. It's also important to point out this is a network-level phenomenon, so it doesn't matter where you get served that UTxO set, and it doesn't matter who serves you these artifacts: through the magic of cryptography you'll be able to verify that they're correct with a high degree of certainty, meaning that you get basically full-node security, but you're only paying the light client price. So this is a huge thing for us, and it's something that will allow us to have, I think, a much richer user experience for Daedalus, because instead of having to be a full node by default, download the entire blockchain, and then use Cardano, you can just simply boot up, instantly use it, and, as time progresses, the Cardano wallet will simply upgrade itself to a full node, but it will not interfere with your user experience. So that's a major thing, I'd say, for the other six months, along with the GraphQL work that will be getting us in that particular direction. And I think exiting 2019 this really does make Cardano super-competitive, because we'll have this beautiful smart contract model that's extremely well thought out, we'll be completely decentralized, and things will be running quite well. We'll have a very rich, deep wallet that's super secure, that's audited from many different perspectives, and it'll have a great user experience because you can bootstrap very quickly, and basically the network will be quite resilient and quite distributed.
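To make the bootstrap check concrete, here's a toy Python sketch. This is purely illustrative: the hashing, leaf serialization, and commitment placement are invented for the example and are not Cardano's actual format. The point is just that a client can download the UTxO set from any peer and verify it against a Merkle root it already trusts from a block header.

```python
# Sketch of the light-client bootstrap check: hash each UTxO entry,
# fold the hashes into a Merkle root, and compare against the
# commitment published in a block header. Names are hypothetical.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a binary Merkle root over a list of leaf hashes."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def utxo_commitment(utxo_set):
    # Each entry is (tx_id, output_index, value); serialization is toy-only.
    leaves = sorted(h(f"{t}:{i}:{v}".encode()) for (t, i, v) in utxo_set)
    return merkle_root(leaves)

# A bootstrapping client downloads the UTxO set from *any* peer and
# checks it against the header commitment it already trusts:
utxos = [("tx1", 0, 50), ("tx2", 1, 25)]
header_commitment = utxo_commitment(utxos)   # as if read from a block header
assert utxo_commitment(list(utxos)) == header_commitment
```

Because the leaves are sorted before hashing, any peer can serve the set in any order and the commitment still matches, which is what makes the check network-location independent.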
In fact, if you look at where we're taking the staking mechanics, we feel that over time we can become fifty to a hundred times more decentralized than the standard dPoS model or the Bitcoin mining model. So in Bitcoin, basically five to ten pools, depending on how you count things, control the majority of the hash power; in the case of EOS it's twenty-one delegates, and actually it's even worse than that, because 90% of the stake appears to be controlled by about one percent of the actors, so your Gini coefficient works against you. Now, some people are concerned about fairness, and they are concerned about the egalitarian nature of these systems, and there's an open question of: does this rule by plutocracy in proof of stake yield a worse outcome than the rule by merit in proof of work? We actually have a paper on this very topic, which will be coming out soon; I tweeted the abstract and the author list. We actually created some mathematical modeling to try to get a deeper understanding of this, and our conclusion is that if proof of stake is parametrized correctly, proof of stake can actually be a much fairer system than proof of work, in an apples-to-apples comparison, and if you're curious about why that's the case, the paper will actually explain it in pretty good detail. So not only will we have this beautiful system that is completely decentralized, very resilient, and easy to use, with lots of great crypto in it, we also will have a system we feel is simply better than Bitcoin in every appreciable dimension. We did borrow a lot of great concepts from Bitcoin and refined those concepts, like the concept of having a scripting language with very careful thought behind it, so that its capabilities don't introduce security flaws, and the concept of UTxO accounting and support, but moving into the next dimension. You know, if we are going to have these pools of effort, let's make those pools fair and make sure that there can't be one pool that dominates all of them; there's an economic
disincentive for that, and also other concepts, like the pools have to pay you if you've delegated to them, and so forth. It's also a system built acknowledging the realities of things, for example HD wallets, hardware wallets, and so forth. These are nuances that didn't exist when Bitcoin was originally designed, and they were kind of patched into the system over time, and there's a wide spectrum of opinions on maybe what's the best way of handling them. What we've been doing with Cardano is that we're aware of these innovations, so we're able to bring them into the protocol level, and as a consequence of being a protocol-level construct, it makes it much, much easier for us to deal with them, as well as things that probably will never work their way into Bitcoin, for example this notion of hashed metadata with the transaction; that's probably never going to come down the Bitcoin pipeline in a reasonable way at a protocol level, but we will start building those types of support in. Okay, so one final thing: let's talk about the Daedalus DApp platform. We had a pretty long conversation about the Daedalus DApp platform, and basically that's all about, well, how will you actually build DApps in the Cardano ecosystem and deploy those DApps in a reasonable way with a great user experience? For over three years we've been having these conversations, and we've actually had a small skunkworks within the company, led by a gentleman named Rhys, managed by Darko Mijić, the product manager of Daedalus, and they've been exploring basically a lightweight model that borrows from a lot of the good things that have happened in the mobile space and in the web space, but also accepting the realities that these types of applications control your money, and if they go wrong there are pretty bad consequences, and also a desire not to have a central curator. So the first instantiation of this effort is basically why we chose to use the Electron platform to begin with. We could have easily gone down a different
direction that was a little lighter weight, but we wanted to have Node and Chromium in our stack: (a) because these are huge projects that are super well maintained, and (b) because all of the millions and millions of web developers who understand how to write software for these systems, or have written software for these systems, would instantly be able to apply all their tools, domain knowledge, and capabilities to these systems. And there's a pretty reasonable security model around them; in fact, in some cases the security model is superior to the model we're seeing on the metal, for example the constant-time WebAssembly (CT-Wasm) effort for cryptographic implementations; it's just wonderful work that's recently been done. So we chose that model, and basically how generation-1 DApp development is going to work on the Plutus side is that you'll write your Marlowe and Plutus code for all of the logic that will go on the blockchain, then you'll write something like Haskell for the logic that's going to go on the client side, or you could write that in JavaScript. Then there'll be a way to go from that Haskell into GHCJS or into WebAssembly, and then you can package it all together into a node package or something like that, and bundle this as a single logical unit. That unit, along with an application manifest, all that metadata, can be hashed and signed by your key, and then there'll be a registration process to take that unit that's been hashed and that metadata portfolio, so the author information, the version information, the hash of the thing, as well as the distribution information, either decentralized hosting or your servers, wherever you choose to host it. All of that is a little stub, and that stub can be put up on the Cardano blockchain. So then what the Daedalus app store can do is look at that stub and pull that information right into a user experience, almost like Google Play or the iOS app store, but the difference is this is a permissionless system, so basically anybody
can deploy these things; there's no curator inside the store, ordering will be handled in a fair way, and basically there's no de-platforming or censorship in this type of a setup. The other thing is that you have a lot of really nice features: you have timestamping, you have a good PKI system, and as a result you have a nice identity system that maps into this. It's easy to do automated updates, because basically what you can do is just update that stub to a new version; Daedalus can always check all the stubs, and if there's a new version it says, oh, you have an update. And then you can also build a nice permissioning system, and here we're talking about inter-app communication; one of the reasons we're moving to GraphQL is that we're going to use GraphQL for the inter-app communication model that we have. So over time it'll be very easy for the consumer to get a DApp; they just install Daedalus, and at that point the Cardano wallet will be decoupled from Daedalus, so it means that when you install it, you can install it as a package into a system, as opposed to this heavy infrastructure that we have, and then you'll have an app store built in, and it's just a one-click install to be able to get your app. You see, oh, that CryptoKitties thing looks really cool, you click that, you get all the front-end code to interface with the DApp, and if you have a wallet you can send funds to it and use it the way it's intended to be used; all those transitive dependencies will be built in there. And then there's a nice reputation system that we can put on top, so community curation can happen; you could ask, well, is this a high-quality app? Has an organization I trust, like Emurgo or IOHK or the Foundation, for example, attested that this application is legitimate, that it doesn't have any stuff in it that could be malicious, or that it's just been misdesigned? Or, if it's a lottery application, for example, is it a fair lottery? All the Plutus work we've done allows you to write correct applications.
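The stub flow described above can be sketched in a few lines of Python. Everything here is hypothetical: the manifest fields, the HMAC standing in for a real signature scheme, and the version-comparison logic are all invented for illustration, not the actual Daedalus registration format.

```python
# Hypothetical sketch of the "stub" flow: hash an application manifest,
# sign the hash, and later detect updates by comparing stub versions.
import hashlib, hmac, json

def make_stub(manifest: dict, signing_key: bytes) -> dict:
    blob = json.dumps(manifest, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    # HMAC stands in for a real public-key signature scheme here.
    sig = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "sig": sig, "version": manifest["version"],
            "dist": manifest["dist"]}

def update_available(installed_version: tuple, stub: dict) -> bool:
    # Daedalus-style check: a stub on chain with a newer version
    # than the installed one means an update is offered.
    return stub["version"] > installed_version

manifest = {"name": "crypto-kitties-clone", "author": "example-dev",
            "version": (1, 1, 0), "dist": "https://example.org/bundle.tgz"}
stub = make_stub(manifest, b"author-secret")
print(update_available((1, 0, 0), stub))   # True -> an update is offered
```

Because the stub carries only the hash, signature, version, and distribution pointer, the store can stay permissionless: anyone can publish a stub, and the client verifies integrity against the hash before trusting the downloaded bundle.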
Meaning that when you have the intention to do that, you can write it and get high assurance that a DAO bug is not going to happen, or the Parity multi-sig bug is not going to happen, but it's still up to the developer to write what they want, and it's still up to the developer to decide the rules of the game, and it's entirely reasonable that people could deploy things that are not fair. The only way to get over that is a curation component, meaning that somebody you trust has read the code, looked at it, and verified that this application looks fairly reasonable. Okay, so that's kind of the 1.0 model, and all throughout 2019, especially the second half of the year, we're going to be moving in that direction. It's a very nice model, because we're basically saying: deploy this stuff in environments constructed by some of the largest, most profitable companies in the world, with hundreds of companies supporting them in a very federated way; it's based on web standards. So, hey, you know, if millions of developers are happy with it, if big multinational companies are using these things, it's a good model to step up with, and then our innovation is going to be in the coordination and the organization, as well as the domain-specific languages to handle that blockchain logic. Furthermore, we can add on service layers to the system, so that the delegates, the stake pools, can double up, not just processing transactions, but running service networks, like random number generation, data feeds, and other things, and offer these in kind of a service-oriented architecture to DApp developers. And then you can just say, hey, that infrastructure is there, there's a toll for using it, and if an app chooses to use it you have kind of a sustainable payment model. So the stake pool will go beyond just, what are they charging to stake, and look more like a service provider of decentralized services to the ecosystem, and then people can consume them as they want. Okay, so that's basically where we're at. We're moving
really fast, and we're really happy about the speed of things. What's really nice is we're moving from super-high-technical-debt code that really wasn't super well written to code that doesn't have any technical debt, that's based on specs, and is exactly what we want. We know how to write specs; we've demonstrated that we've taken a spec to market. About half our code's been swapped out to the new wallet back-end, and we have a really good strategy for how to get terminal support, multi-sig, hardware support, how to get smart contracts into the system, asset issuance in the system, what that DApp development model is going to look like, and then, by extension, getting interoperability with the Ethereum virtual machine and WebAssembly and other standards that are emerging in the marketplace, while preserving a safe core to the system to ensure that the things that we use the base layer for are fairly well thought out and well designed. And Shelley is a spectrum; we're right now at the beginning of it, starting to roll it out. The fact that we've gotten here is a humongous milestone, and it's just really cool to see all this stuff come together, and I'm really, really happy that we've gotten this far. Building a cryptocurrency is a humongous endeavor. The vast majority of people who do it fail, and the vast majority of people who do it half-ass it: they copy someone else's work, or even if they invent something new, what they generally do is cheat on the things they don't know how to do. If they don't know how to write a consensus algorithm, they borrow someone else's, or adopt a legacy system like the EVM, and they say they've innovated. And then what they do is they don't think about a holistic system, so they don't realize that design decisions you make in one part of your system, if they're not correct or if they're suboptimal, won't just be contained within that part of your system; they will cascade throughout your entire currency and inevitably lead to a death spiral once people take
advantage of them, as we're starting to see with some of the incumbents in the altcoin space. So that was a long update; sorry it took so long, but anyway, I'd like to get to your questions now. Let's see what we got; there's probably quite a bit of them, and since it's 12:21 at night here, I do have to go to bed soon, so, yeah, there are too many questions. Nostradamus asks: satellite staking pools, a dream or is it a possibility? Community members could buy part of a satellite; that would decentralize even more. So if you look at a logical breakdown of a cryptocurrency, you have the notion of a scripting language, you have a notion of a data layer, there's this idea of how does that ledger achieve a single source of truth that people can agree on, and then there's this concept of, well, how do I move things around? So, you know, one thing is what can you do with it, how do you store it, how do you agree that what's been done is right, and then how do you inform people that you've done that, how do you move the data around? It's kind of a nice logical breakdown, and there are other ways to do it, but you'll find that most of the utilities of these systems fall into one of those categories: simple scripting or smart contracts; proof of stake is within the consensus side. So this question is around the network side of the system, which is: there is a concern that these systems are brittle, because while we can build the perfect consensus protocol and the super secure scripting language, and we can have an idealized way of storing the data, so a perfect blockchain, if we can't safely move the data, if the roads that the data runs on, the pipes the data goes through, are in some way contaminated, to either censor it or track it, then we really haven't accomplished very much. It's like saying, well, if the only way to use a cryptocurrency is to go through a centralized exchange and go through KYC/AML, you're really never going to have a notion of a privacy coin, because, you know, you
can be private at the transaction level, but they know when you've exited and entered, and they can kind of fill in the rest of the history in between. So there have been a lot of discussions about alternative ways of transmitting data within your system, and one of those, which Blockstream introduced, though by no means were they the first with this, as it's been an idea in the cryptocurrency space for many years, I think as early as 2010, was the concept of: could we construct a satellite backbone as a relay system, if anything just to propagate transactions and blocks, to guarantee that they couldn't be censored, especially in countries where there are very aggressive things like the Great Firewall of China, for example? Now, as with all things, as with all technology, it's certainly possible to do a lot, but there's a cost-benefit, and so the first question is: well, when you're talking about a satellite, somebody has to launch it, somebody has to design it, it has to have software, it's hardware, so somebody has to build it, and satellites have owners. So when you have those types of systems, intrinsically there are some points of centralization. Now, you can kind of get around this if there's some sort of federated ownership, or it's kind of infrastructure where the ownership lives in politically neutral jurisdictions like Switzerland; then what you've kind of done is said, all right, well, yeah, I get that, but I've solved some of those problems through creative legal means or through federation, and that's certainly an interesting area to explore. Another idea would be making the cost of building these satellites and deploying these satellites so low that it's conceivable that small communities could get together, construct them, and then people can interface with them through some sort of mesh of devices: this idea of micro or nano satellites with a pretty long runtime and a reasonable uptime and low cost of deployment. So, you know, if anything, if we could get 10,000 or 15,000 of
them, I could go to SpaceX, buy out all the cargo on a Falcon 9, fill the whole cargo bay, launch it, and then suddenly we have a satellite grid all across the world. But you also could explore alternative ways of relaying data; for example, Project Loon from Google explored the idea of doing data through balloons, so basically semi-permanent balloons, and they would be put in strategic locations, and they would have functionally served the same role that a satellite serves, but they would be launched and maintained at several orders of magnitude lower cost. Now, I think with all things in networking, the key is to understand that you're not going to have a perfect solution, and heterogeneity is your friend; in other words, you need to have multiple rails, multiple roads to run on. One of the reasons why the Internet's been so successful, hard to control, hard to censor, for us, has been that it grew in a very bizarre way, in a very organic way, in a mostly unregulated way in the beginning days, where governments kind of understood it but they didn't fully appreciate it. So a lot of things that they would have done while that hardware was being put into place and those protocols were being developed, like, for example, mandating that identity be required to log on, or things like that, they just simply didn't do, out of ignorance, and as a consequence it kind of grew up in a way that's very resilient. The other thing, the political reality of the internet, was that you didn't really want a single country to control it, so by necessity you kind of had to agree to let everybody use it with some fair game. Now this is being challenged because of the existence of gatekeepers like Amazon and Google and Microsoft and Facebook and other large companies, who exert enormous control over all of the assets we tend to enjoy and use, and in many cases can track you even if you don't directly use their services, or vastly reduce your quality of service if you don't use their services. So I think it's important that we explore
mesh net protocols like Open Garden and CJDNS, I think it's very important that we explore micro and nano satellite ideas, and it's very important we explore alternative ideas; this Loon idea is pretty cool. And in the case of the economics of it, the cool part about our system is that these relay systems can actually be smart-contract-bearing instruments, meaning that they have open ownership, but to use them you have to go ahead and pay a transaction fee, and that fee is paid to a contract. Then there could be a governance model connected to it, where that forms a micro-treasury to maintain that unit, and as people are using it, it gets more and more money in the treasury, and eventually, when the device gets towards the end of life or needs to be upgraded, the device has its own bank account, its own money, to pay for its upgrade, and that governance group can decide how to do that. You can come up with all kinds of creative competitive models, so I think this is something our space uniquely enables to occur, and it's naturally going to occur, if out of anything, just competitive pressure, or a desire to make more money, or just because people get tired of being censored or controlled. That's a pretty good question.
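The relay-with-its-own-bank-account idea can be modeled in a few lines. This is a toy economic model, not a smart contract; the fee, upgrade cost, and counts are invented numbers purely to show how per-use fees accumulate into a treasury that eventually funds the device's own upgrade.

```python
# Toy model of a relay device with a micro-treasury: each use pays a
# fee into the device's balance until an upgrade can be funded.
class RelayTreasury:
    def __init__(self, fee: float, upgrade_cost: float):
        self.fee = fee                    # toll paid per relayed message
        self.upgrade_cost = upgrade_cost  # cost of end-of-life replacement
        self.balance = 0.0                # the device's own "bank account"

    def relay(self, message: bytes) -> bytes:
        self.balance += self.fee          # fee paid to the contract per use
        return message                    # ...then the data is relayed

    def can_fund_upgrade(self) -> bool:
        return self.balance >= self.upgrade_cost

relay = RelayTreasury(fee=0.5, upgrade_cost=100.0)
for _ in range(250):                      # 250 uses at 0.5 each
    relay.relay(b"tx")
print(relay.balance, relay.can_fund_upgrade())   # 125.0 True
```

In the real system the balance would live in an on-chain contract governed by the device's governance group; the sketch only shows the accumulation dynamic.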
Yeah, so I see Mr. B says he has a password issue with Daedalus. I don't do tech support in AMAs, though I've seen a few people complaining about this. I'm not aware of any password issue with Daedalus; it would make no sense for that to exist. It could have been a situation where, for some reason, you had a corrupted state or database, or somehow a broken version of Cardano SL, and when you were upgrading from the old Cardano to Cardano 1.4 there was an anomaly, an edge case that our QA just couldn't reproduce, very specialized to your system, where the password did not get re-encrypted properly. We strengthened the encryption of the spending passwords between 1.3.1 and 1.4, so it is entirely possible that an edge case created some form of corruption. The good news is that it's exceedingly easy to resolve this; the bad news is that it's a time-consuming process. You simply delete your database, assuming you have it backed up and you have your recovery words; you delete your wallet database, and there are instructions on the Daedalus wallet website on how to do that, and then you restore your wallet. Basically what this means is that the entire system is reconstructed from scratch, and when you restore a wallet, that's password-free, because what the password does is shield an existing private key family, but it doesn't live within the restoration process. So when you restore, you restore to an unshielded private key, and then you apply the password for encryption; these can be done at the same time, but it won't have anything to do with your old password or your old setup. So if you're having a password issue, do send it to the help desk; we'll definitely investigate it, because we are interested. In fact, we thought for a long time, because the new wallet back-end is a lot more sophisticated in many respects than the old one, would there be an issue where passwords could potentially be corrupted, which would make it difficult for people to spend their money without going through a process.
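To illustrate why restoration is independent of any old password, here's a toy Python sketch. This is not the real Daedalus key scheme: the key derivation, salts, and XOR "encryption" are all stand-ins. It just shows the structure described above, where the root key comes from the recovery words alone, and the spending password is applied afterwards to shield that key at rest.

```python
# Illustrative sketch (not the real Daedalus scheme): restoring derives
# an unshielded root key from the recovery words; the spending password
# is then applied to encrypt ("shield") that key at rest.
import hashlib

def restore_root_key(recovery_words: list) -> bytes:
    # Restoration depends only on the words, never on any old password.
    return hashlib.pbkdf2_hmac("sha512", " ".join(recovery_words).encode(),
                               b"wallet-salt", 10_000, dklen=32)

def shield(root_key: bytes, password: str) -> bytes:
    # Derive a mask from the password and XOR it over the key.
    mask = hashlib.pbkdf2_hmac("sha512", password.encode(), b"pw-salt",
                               10_000, dklen=32)
    return bytes(a ^ b for a, b in zip(root_key, mask))  # toy encryption

words = ["legal", "winner", "thank", "year"]
key = restore_root_key(words)          # unshielded key, no password involved
stored = shield(key, "new-password")   # now apply the new spending password
assert shield(stored, "new-password") == key   # XOR mask is its own inverse
```

The design point: because shielding is a layer applied on top of the restored key, deleting the database and restoring from the words always works, regardless of what happened to the old password.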
We thought we had hunted down all those edge cases, but it's very possible that for a small set of people there's just some weird thing that we didn't have a broad enough imagination to see, and if that's the case, do submit a ticket to the help desk and Carl will take a look at it. And if you're really frustrated about this, you have two options: one, you can restore your wallet into Yoroi and immediately use it from the recovery words, or you can delete the database, reset the whole thing, and restore the wallet, and that should solve the issue. Okay, let's see what else we got. Okay: if CryptoKitties was ported to Cardano, would it be sizably better? That's a really interesting question. So we get a lot of questions about the scalability of the system, and exactly how much TPS do you need, and this is a very controversial and often misunderstood topic. So my hope for where we can take Shelley is performance somewhere between 50 to 250 TPS. When we worked with Integer 32 on getting some preliminary benchmarks for Ouroboros Praos way back in the day, this was in 2017, they were estimating a greater than 200 TPS rate, so we said that's great, this gives us a lot to work with, because Bitcoin operates at a much, much lower rate, and this is about 30 times better than that, and for most networks this seems to be, for sustained rates, a pretty reasonable load to be at. There are certain networks, like the Visa or MasterCard network, that need to operate at much higher rates, like five to ten thousand. Okay, so we said that's a good starting point, and if we could achieve that we'd be over the moon, because it would already be one of the most reliable, fast, well-balanced systems. But then how do we get to those big sexy numbers that people really care a lot about? The first thing you have to understand is that there are actually a couple of factors that are like a pendulum; not everybody gets to have their unicorns and their money, you have
to know that there is a trade-off between throughput and latency. The faster you want transactions to confirm, it's going to have some impact on the maximum TPS rate of the system; the more TPS you have, the more settlement, this idea that you're confident the transaction has cleared, is going to suffer a little bit within a conventional consensus protocol. Now, this is something that people have heuristically understood for a long time, but we did publish a paper about this in a formal way, so you can actually really understand that relationship in a mechanistic way, a mathematical way. It's called Parallel Chains, written by Matthias Fitzi, Peter Gaži, and other scientists, and it's the very first step in the Ouroboros Hydra research agenda. So basically we know all the common sharding techniques; that parlance has existed for a long time, and we know how to run chains in parallel, you know, that's not rocket science at this point, and we know how to manage state between them, and we have a really good understanding of this. And not only do we know it, we also have our own process calculus that we created, and we have published, I think, a 70- or 80-page paper; Wolfgang did that, one of the computational logicians who works with us through Well-Typed. Knowing these things gives us a pretty good ability to deliver to market a very high throughput protocol. So if we want to get to five to ten thousand, there's a road to get there, but then you have a conversation of, well, what does my network look like, what trust assumptions am I making, what am I giving up, how much Byzantine resistance is going to dissolve, and there are a lot of little mechanics, like inter-shard communication and so forth, that have to be worked out, which is why we also have this aggressive sidechains agenda. So we're pushing really hard to get parallel chains and sidechains to eventually converge together and become Ouroboros Hydra, which will be kind of the capstone
protocol. Now, the idea there is that there's this concept that as you add more staking pools, more delegates, more people doing work, then instead of the work being replicated it's distributed, so the system will get faster with more users. And then you also have to decide, well, what does settlement look like, and what throughput? Our belief is that by layering protocols you probably can offset some of these trade-offs, so if you're willing to accept some more federation in the system, then you can get super fast settlement and pay a special transaction fee for that, or you can use the base network, get eventual settlement, pay a lower transaction fee, and have much higher throughput, so it can process a lot more work. Okay, so that's one dimension, the chain itself, and it requires some network innovations, and those are underway, and it requires some innovations in understanding these trade-offs, and some tuning. So you start with Shelley, and 50 to 250 is our range there; for Hydra, our targets are five to ten thousand, we'd like to be in that range. Now, where does that get us if we just get that alone? More Visa scale; you know, realistically that's a network that can process a humongous load, and most people would be pretty happy with it. But we're not quite done, because there are a lot of nuances in this conversation. One nuance is that we're just talking about dumb transactions, or simple scripts; when you're talking about DApp programs, stateful programs that are running long-term processes, they're very heavy, the validation time and the validation mechanics are more involved, and as a consequence that looks more and more like a cluster of transactions as opposed to one. The nice part about the way we designed Plutus and Marlowe is that, because of these things and the use of the extended UTxO model, it's much easier to shard them, and it's much easier to actually understand how to load-balance these types of things, so that's less of an issue for us. But there's a natural avenue that the entire industry is starting to move towards.
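The scaling intuition above, more pools means distributed rather than replicated work, can be captured in a back-of-the-envelope model. The numbers and the cross-shard overhead fraction here are invented for illustration; this is not a performance claim, just a sketch of how parallel chains multiply throughput while coordination eats part of the gain.

```python
# Toy throughput model: k chains run in parallel, each losing a
# fraction of capacity to inter-shard communication once k > 1.
def effective_tps(base_tps: float, k: int, cross_shard: float = 0.1) -> float:
    """Effective TPS of k parallel chains with per-chain base_tps,
    paying a cross_shard coordination overhead when sharded."""
    overhead = cross_shard if k > 1 else 0.0
    return base_tps * k * (1 - overhead)

print(effective_tps(250, 1))    # 250.0   -> the Shelley starting range
print(effective_tps(250, 45))   # 10125.0 -> into the Hydra 5k-10k target
```

The model also makes the pendulum visible: pushing k up raises throughput but, in a real protocol, the coordination cost shows up as longer settlement, which is exactly the throughput/latency trade-off discussed earlier.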
With ventures like STARKware and Pinocchio and so forth, people are saying: instead of validating the entire program, just check some sort of proof that the program was correctly executed and that this is indeed the output, and that'll be done off-chain, and the on-chain settlement is just verifying that the proof is correct. So we think that's going to be the great long-term direction to basically bring these things within constant bounds. Okay, then there's also this idea of using layer-two protocols, like Lightning, for example. Yeah, there's just great innovation in that space, the RGB protocol, you know, other things that are being put into Lightning; it's really exciting to see how fertile that space is, and there are great academic papers being written, great engineering efforts being made, and this is not a game for us of reinventing the wheel. The reality is that a lot of the complexity that's being introduced for Lightning is because they have an implementation target, which is Bitcoin, and so they're trying to figure out: how do I get this thing that was never designed to work with this thing to work? It's almost like you have an old motherboard in a computer and you're just trying to rig it so that, without upgrading the power supply or changing the processor, you can somehow get your favorite graphics card in; who knows if you can make that happen or not. It's not a system that was built with upgradability in mind, and as a consequence it's very difficult to put it in, and they have to do it very carefully, and there's a lot of value at risk. We have a unique advantage with Cardano, having kind of a multi-layered system already, in that we have stake pools, so you have this concept of these trusted 24/7 nodes that have reputation, people have delegated to them, and there are ways of proving that they are who they are, so you have a PKI, you have a reputation, and you have an expectation of persistent quality. So when you move beyond just the service of maintaining the ledger and
you actually move to saying, hey, let's actually do other things, you can create a layer-2 network with those pools. And you'll notice something: we have a thousand of them, that's how far we want to go. It's an enormously decentralized system, much more so than any of these systems that are deployed. So we hazard a guess that we should be able to take these great innovations being built in the Lightning community, a lot of them now being formalized by groups at UIUC, at Lightning Labs, and other places, pick the best available, bring them in, and add them as a value-added service to stake pools. Now what is the consequence of this? It means that for certain clusters of transactions, micropayments, tipping, a lot of these things that you'd like to bundle together and process off-chain, all those clusters of transactions can be put into that layer-2 system, because either the value at risk is lower or the trust bounds of running these systems are tolerable, and in many cases we can even figure out ways to anonymize things when they go into these networks, so that the people processing them might not even fully be aware of the transactions they're processing. So that's a very fertile area, and we think that that's how you get from ten thousand to a hundred thousand. Okay, and if you look at protocols like Teechain out of Cornell, Emin Gün Sirer's protocol, their in-the-lab performance was a hundred and sixty-three thousand TPS on a production system using SGX. Great work, and easy for us to bring in when the time comes and layer with other solutions, like adding oracles and so forth. So the agenda for getting great performance is really three-pronged. One is get Shelley out, and then evolve Shelley to Hydra, which is really an iteration of the sidechains and parallel-chains research intersecting each other, after we've done some more formal work to get it to that state, and
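The layer-2 idea described above can be sketched as a toy payment channel: two parties lock funds once, exchange any number of off-chain balance updates, and only the final state ever touches the chain. This is a minimal illustration under assumed names, not Cardano or Lightning code; real channels add co-signing, dispute windows, and revocation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChannelState:
    seq: int          # monotonically increasing update counter
    balance_a: int
    balance_b: int

class PaymentChannel:
    """Toy bidirectional channel: many off-chain updates, one on-chain settlement."""

    def __init__(self, deposit_a: int, deposit_b: int):
        # Opening the channel is the first (and, with settle, only) on-chain step.
        self.state = ChannelState(0, deposit_a, deposit_b)

    def pay(self, sender: str, amount: int) -> None:
        # An off-chain update: in a real channel both parties co-sign the new state.
        s = self.state
        if sender == "a":
            if amount > s.balance_a:
                raise ValueError("insufficient channel balance")
            self.state = ChannelState(s.seq + 1, s.balance_a - amount, s.balance_b + amount)
        else:
            if amount > s.balance_b:
                raise ValueError("insufficient channel balance")
            self.state = ChannelState(s.seq + 1, s.balance_a + amount, s.balance_b - amount)

    def settle(self) -> ChannelState:
        # One on-chain transaction, regardless of how many payments happened.
        return self.state

channel = PaymentChannel(100, 100)
for _ in range(3):
    channel.pay("a", 5)   # three micropayments from a to b
channel.pay("b", 10)      # one payment back
final = channel.settle()
```

Four payments collapse into a single settlement (final balances 95 and 105); that compression of many small transfers into one on-chain transaction is where the throughput gain comes from.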
that gets us from the 50 to 250 to our target, we hope, of 5,000 to 10,000. This presupposes some more networking enhancements, and we have a team that works on that. And then for complicated programs, and eventually all programs, try to get those represented as a zero-knowledge proof that they've been executed correctly; that's the Pinocchio approach. Update and modernize that, make it run off-chain, so that you basically know it ran correctly, but the people who are involved in it are the ones paying for those resources, and it's not slowing the system down exponentially when you have a CryptoKitties phenomenon. And then for the really broad stuff, run that with a layer-2 solution, and basically that's just a partnership and picking the best available. Given the fact that we have the ability to make much richer modifications to Cardano than Bitcoin can, it's much simpler for us to absorb that, and we have a natural set of trusted people within the system to run these types of services, so we have a huge advantage in both of those dimensions. It also will help us get interoperability with other systems. We have our sidechains research, and whether it's other protocols like Interledger, or supporting Lightning, decentralized exchanges, and all these other things going to those types of networks, it means all those currencies wired into that system are ones we potentially can be interoperable with. So those bridges, that internet of blockchains, that third pillar of interoperability for third-generation blockchains, this is a great way of getting there. And it's just really nice to be in the same research pool as MIT and these other guys who really are taking this stuff very seriously, where there are millions and millions of dollars being spent on a regular basis, with lots of great engineers, not just scientists, building practical production things and getting them into circulation. So that's your spectrum, and those are the three areas. In research we have Markulf Kohlweiss, who
leads the zero-knowledge side of things. He figured out a way to make Ouroboros private, if we ever wanted to go down that road; it's called Ouroboros Crypsinous, and it's impossible to pronounce all these Greek names. And we have him also working on a lot of zero-knowledge stuff related to just these types of things: off-chain verification of programs, proof-carrying computation of programs, on-chain verification. And we have been parsing the Lightning world, and we will be setting up a dedicated group in the first half of 2019 to pick the best available protocol and pull it into Cardano, and that should be relatively straightforward to do. "Craig Wright invented Dan Larimer": that's actually one of the most awesome things I've read. Let's look for one or two more questions and then I've got to get off to sleep, because I have an early-morning meeting. Are you patenting? No, nothing that we do is patented. All the work IOHK produces is under either an MIT or Apache license, and we are patent-free. Our written and video work is under a Creative Commons Attribution license, so tell people where you got it from, but you're allowed to use it commercially or non-commercially and share it with everybody. That's one of the value propositions of what we do: we're changing the science of the space, and it's the people's science. All those 40 papers, you guys can take and use. In fact, I had dinner today with a Bitcoin maximalist, Bob, and he thinks all these cryptocurrencies are scams, but he's a big Bitcoin guy. And we were talking about a wallet he's constructing for one of his clients, and I was mentioning he should use our UTXO spec and also the input selection policy that Edsko de Vries created, because these things could be used in Bitcoin as much as they can be used in Cardano, and he's probably going to take a good look at that. And that's really the cool part of this field: we can borrow from Lightning and other projects when they come up with good ideas, but then we
can pay it back by creating great ideas for other people to borrow from, and we all succeed together. You know, jiu-jitsu athletes in Africa and Brazil: when I was a kid I did judo, and I'd like to get back into it once I'm not so fat anymore, but I'm not a big jiu-jitsu man, although a lot of my friends went into MMA fighting. And yeah, I went to a pretty crummy undergrad, Metropolitan State College of Denver; they've even renamed it to Metropolitan State University of Denver. That was before I went to CU Boulder, which is actually a great university, it was wonderful, but Metro's kind of, well, it's got pride, man. And a lot of my fellow classmates who wanted to be doctors and didn't quite get it, for some reason, went into Brazilian jiu-jitsu, and there was like a whole club of them, and they've gotten pretty good, because I'm getting old. Yes, is it possible to tokenize intellectual property rights on Cardano? That's the notion of a non-fungible token. So basically what you would do is create a master token that represents a non-fungible asset, like a patent, and then you have this idea of vending-machine-style license issuance. So you would have a menu of license options, and each of those license options would have an address associated with it with a minimum payment. You would send payment to it, and if the transaction settles, the machine will then issue a use token, and that use token would then be sent to you to say, okay, you now have the right to use this IP. Now there are two ways of enforcing intellectual property rights. One can simply be that idea where we have two-sided metadata: you generate a lot of data with the transaction, and then you say in the terms of service that the only people who are allowed to use this are people who have purchased the associated license in this manner, and if there's a violation you would sue somebody through a traditional court. Another way, if it's software, is by layering these solutions with trusted hardware, so basically that's a DRM
system. And so what you can do is say that that trusted token will only be vended to a hardware enclave, and that the software is built in a way that it will not boot unless that token is present, so it's like a DRM token. So this could actually be a way of handling video game distribution if the parent company has gone out of business. Let's say you've created the world's greatest video game, like Baldur's Gate 3, if it finally came out and they did it justice, but you go and release it and it's not a commercial success, you screwed up the whole game, no one likes it, but you wanted to enforce some sort of DRM system. The hazard is, if the parent company goes out of business, and there's a DRM system where their server has to start the game, the game is now bricked, no one can use it. But if you have a system like this, that trusted hardware enclave can basically get a token from the contract, and that token was distributed from a decentralized infrastructure, so you have decentralized DRM. And actually the first use case of this, we think, is going to happen with the Daedalus app store. So you have this idea of paid applications, and by layering Daedalus with trusted hardware you would then be able to actually enforce a DRM scheme for your games. So you can imagine a free and a premium version of a game like CryptoKitties, and to buy that premium version, or skins or these types of things, and have them displayed, you can have a system like that. This can be managed on a blockchain, or it can be represented as a token, and then the token can go into the enclave. What's also really cool about this is it gives you the ability to resell the license, and you can even have royalties flow through that resale. So let's say that you're just done with it, you don't want to deal with it anymore: you can go sell it to your friend, and when it goes to their enclave it records the transaction amount, and then 20% of whatever you've
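The vending-machine flow above can be sketched in a few lines: a master token for the IP, a menu of priced license options, a vended use token, and a royalty skimmed back on every resale. All names and numbers here are hypothetical; this is the flow, not the actual Plutus or Daedalus design.

```python
class LicenseVendingMachine:
    """Toy model: a master token represents the IP; paying at least the listed
    price for an option vends a use token; resales return a fixed royalty."""

    def __init__(self, master_token: str, royalty_pct: int = 20):
        self.master_token = master_token
        self.royalty_pct = royalty_pct
        self.menu = {}          # option name -> minimum payment
        self.owner_of = {}      # use-token id -> current owner
        self.royalties = 0      # accumulated by the parent contract
        self._vended = 0

    def add_option(self, name: str, price: int) -> None:
        self.menu[name] = price

    def purchase(self, option: str, payment: int, buyer: str) -> str:
        # The 'vend': if payment settles at or above the minimum, issue a use token.
        if payment < self.menu[option]:
            raise ValueError("payment below the minimum for this option")
        self._vended += 1
        token = f"{self.master_token}/{option}#{self._vended}"
        self.owner_of[token] = buyer
        return token

    def resell(self, token: str, seller: str, buyer: str, price: int) -> int:
        # The transfer is recorded; a fixed cut flows back to the parent contract.
        if self.owner_of.get(token) != seller:
            raise PermissionError("only the current holder can resell")
        royalty = price * self.royalty_pct // 100
        self.royalties += royalty
        self.owner_of[token] = buyer
        return price - royalty  # what the seller actually keeps

machine = LicenseVendingMachine("patent-42")
machine.add_option("premium", 50)
token = machine.purchase("premium", 50, "alice")
proceeds = machine.resell(token, "alice", "bob", 100)
```

Here the resale for 100 returns 20 to the parent contract and leaves the seller 80, which is the "royalties on resale" mechanic in miniature.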
resold that IP for goes back to the parent contract. And what's really cool is that this can allow you to create limited-license marketplaces, and you can have this kind of bidding work where prices can go up. So imagine a world where Magic: The Gathering, for example, issued only a finite number of cards for a certain set, and those were represented by a token, and that token lives within that trusted hardware enclave. Because it's within the hardware enclave, you can enforce whatever transfer rights you care about; that can't be hacked, it lives within a device that runs the code as written. Then every single time a person trades the card and sells it to another person, the parent company would get a percentage of that transaction. These are the kinds of business models you could create with that type of license, so it's really exciting to see what you can do with intellectual property. And it's really nifty: with zero-knowledge proofs you could even prove that you have a portfolio that has a market rate of X without necessarily revealing it. So for example, you can imagine art marketplaces where you associate each painting with a verified token; you could verify that your total art portfolio is worth half a billion dollars without revealing which particular paintings live in that portfolio to an outside auditor. And we've talked to the art industry a few times; there are all these people interested, especially here in Switzerland, they store a lot of very valuable stuff in this country, go figure, the mountains are hollow. When can we expect support for Daedalus with Ledger? The problem with support on our end is we have to wait for Icarus-style addresses to come; those are coming with 1.6. They're being implemented in parallel with the decoupling, and the code will probably be feature-complete by the end of the month, and then it goes to QA, and we'll roll that out. Once we have those addresses, then it's pretty short order for support; it's more building the infrastructure of
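The "prove your portfolio total without revealing the paintings" idea rests on homomorphic commitments. Below is a minimal Pedersen-style sketch with toy parameters (the prime, the bases, and the valuations are all illustrative): the product of per-painting commitments is itself a commitment to the total value, so only the total and an aggregate blinding factor need to be opened for an auditor. A real system needs a proper prime-order group, independently generated bases, and range proofs on top before this is actually zero-knowledge.

```python
import secrets

# Toy parameters, for illustration only.
P = (1 << 127) - 1   # a Mersenne prime
G, H = 3, 7          # two bases; real systems need independent generators

def commit(value: int, blinding: int) -> int:
    # Pedersen-style commitment: hides `value` behind a random blinding factor.
    return (pow(G, value, P) * pow(H, blinding, P)) % P

# Hypothetical per-painting valuations, each committed and published separately.
values = [200_000_000, 150_000_000, 150_000_000]
blindings = [secrets.randbelow(1 << 64) for _ in values]
commitments = [commit(v, r) for v, r in zip(values, blindings)]

# Homomorphism: the product of commitments commits to the sum of the values,
# so the owner can open only the total without opening any individual item.
product = 1
for c in commitments:
    product = (product * c) % P

total = sum(values)               # 500,000,000: this is revealed
total_blinding = sum(blindings)   # this is revealed too
assert product == commit(total, total_blinding)
```

The auditor checks that one equation against the published commitments and learns the half-billion total, but never sees which painting contributed what.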
Daedalus and a few little things here and there to get that in. So rather than rush everybody and overburden the back-end team, because they've got a lot of Shelley stuff to do and APIs to write and so forth, what I decided is to kind of swallow my pride and let Yoroi be first to market. It's almost like the gigahertz wars back in the day, where there was a race between Intel and AMD to get to a 1-gigahertz processor, and even though Intel was a much larger, much more powerful company, AMD was able to sneak its way to that benchmark with a commercial product; and the same for 64-bit computing, where AMD was the dominant standard. So anyway, Yoroi is maintained by a much smaller team, but they're really passionate and they work super hard, and this is their only thing to do. They're working with Vacuumlabs, there's been a lot of progress already, and as soon as Vacuumlabs finishes its job it'll be very straightforward for Yoroi to add support for Ledger, because they already have a method to do that via what they've done for Trezor; so it's just flipping a switch for them, with some other little things. So probably, if I had to hazard a guess, I would imagine, unless there's a substantial delay from the third-party contractor, or a security flaw or something is discovered, that Yoroi would be able to get that support in some time in February. We're going to be a little bit behind them, and I would not imagine February support, maybe more March support, but it all depends on Shelley; Shelley takes the priority, and I don't want to delay anything there. And if you already have one wallet that you can use a Ledger device with, then that's just an interface; all the security comes from the Ledger device, so it doesn't really matter if you're using Yoroi as the interface versus Daedalus as the interface. Some people would prefer one, but I care much more about the fact that you can finally use it with our system, and that you can also use it for staking, than the mere use
of it; it's just important to have at least one of the two wallets supported, and because Yoroi is focused on that exclusively, they're going to be able to beat us to market there, but I still hazard that we're Intel. My hope for Daedalus would be, at the soonest, February; that's my hope, and that'll be based on the Rust client that we have. I'm also trying to see if we can get our DevOps to push some things through some more common package managers. I'd love to see Homebrew, Snapcraft, and Chocolatey packages, so on Windows you can just go choco install and suddenly you have it, and the same for the other managers, snap install and so forth. So we'll try to get that done; we reached out to some people in the community who are good at this stuff and have some spare cycles, and it's just a matter of me cutting them a check. Okay, I like the old ones, where you can embed images of them, like David Hasselhoff, for example, jet-skiing on a water ski; I'm looking for one good one. And thank you for the congratulations on our IEEE Symposium on Security and Privacy paper. The person you really should congratulate is Dionysis Zindros; he's a graduate student, and this is his first paper, I think, to get into Oakland, and that's a major milestone for his career. Generally you get in during your postdoc years or post-postdoc years, and it's one of those things that helps you get a tenured position in academia. And to do that as a graduate student, especially given his life story, because he didn't start as a scientist, he started as an engineer, he worked at Google, and then he came in a little bit late, he's in his 30s, and you don't really see a lot of people make that transition and do it successfully; there are a lot of things stacked against you, research is a game of the young. So I am super proud of Dionysis and his great work. There are other authors on that paper, like Peter Gaži and Aggelos Kiayias, but in particular that's the one that's really special; Peter's much more veteran and Aggelos is much more veteran, and they've got some
great CVs already, enough to make people want to poach him, but he's a special kid, and he's going to have a phenomenal career; he already is having a phenomenal career. Could you give us an update on Cardano blockchain deployment in Ethiopia? Yes, I can. Okay, so basically the class started January 8th. There are 23 or 24 students, I can't recall the exact number; four are from Uganda and the rest are from Ethiopia. It's pretty cool in that it's an all-female class. This was a request from the Ministry of Science and Technology in Ethiopia; they have an initiative to try to get more women into engineering, and we said, well, can you give us qualified candidates, we don't care either way, and they said no worries. And I think we had several hundred candidates who applied, so there was certainly no shortage of well-trained, well-qualified female engineers. We were a bit concerned that maybe, due to the nature of their education system, they would be underrepresented, but it was actually the opposite; it was a very competitive process, five or ten applicants for each slot. And Lars has just finished the first week, and they covered basic concepts in functional programming, and they're going to go all the way through everything in Haskell, including monads and how to do networks, and they're going to implement kind of their own peer-to-peer network, and they're going to implement a version of Bitcoin and all these super cool things. But it's our first class where we're actually going to have a Plutus component: after they finish the Haskell component, they're going to do the Plutus component and do some cool workshops. And actually, because the IOHK summit is happening in April, we may fly that class out, because they'll be matriculated at that point, the survivors, because it is a very difficult class, and have them come out to Miami and do a presentation. So if you guys go to Miami, to the IOHK summit, tickets will start coming
on sale, I think, in early February; we'd love to have you there. It's Miami, mid-April, I think the 17th through the 19th; the website's up, though, and the details should be on there. Now, I've been asked a lot, well, how does this fit into our broader enterprise strategy? We have to learn how to teach people how to use the tools that we construct, and our tools come with a deep philosophy embedded. It's not about getting a million developers; it's about getting a thousand good developers, because when you have a thousand people who are very good at what they do, they can go build 10 or 100 good projects, and maybe two or three of them survive, but those that do become the Facebooks, the Googles, the Amazons that actually carry the entire industry. It's not about volume, it's about quality, and it's about experiences, and it's about unifying that holistic one-click install that the user has come to expect, and being able to have an infrastructure that can support all of that. My job is to build great infrastructure and teach people how to use it; their job is to leverage that knowledge and that good infrastructure to actually go and do cool things. So the purpose of entities like Emurgo is, once they're ready to go, to give them the resources that they need, like they're already starting to do with the startup accelerator in New York, and they'll expand that; they give them the resources they need to have a shot. And the purpose of the treasury system is also to give them the resources they need to have a shot. So I want to make sure that the first crop of people coming out actually know what they're doing, instead of just guessing and reading tutorials and playing around with things until they screw it up, because 60% or more of the contracts on Ethereum have some sort of flaw in them, and that's not so good. And so that means we've got to get hands-on, got to get dirty, got to go out there and train them in person, and we did that in Athens, we did that in Barbados, and now we're doing that in
Ethiopia. The unintended consequence is that we like the developers so much, we've just been hiring them: we hired half of the Athens class, we hired most of the Barbados class, and there are actually code commits, and some of our best engineers are people we trained and brought up from the ranks, and they're making great things. In fact, two Barbadian students showed up for the delegation workshop, which is pretty cool if you consider that they started with nothing; one was a chemistry major at Oxford and now he's a core developer for Cardano. So our hope is that this Ethiopian class will be able to master what the prior courses have mastered and actually make meaningful and significant contributions to the Cardano ecosystem. Now, we're starting to exit the era of exclusive in-person training. We're going to continue doing that, but we're also pursuing partnerships to MOOC-ify all this content, massive open online courses, and basically take this and put it on the web and make it free, and work with other partners so that they can make it free or very low cost, and then subsidize, with the Foundation, a regimen of getting 10,000 people through that extension into the system. So we'll have a core of extremely well-trained people that we trained ourselves, who will have access to capital and either the opportunity to work directly for us or in partnership with us or our partners, and then we'll have a constellation of lots of curious parties who have all that requisite knowledge to actually do some interesting things, and I think that's a sufficiently large set of people to populate our platform. If you look at the success of PlayStation or Xbox or any of these other systems, it's not that one system had a huge number of games on it and then suddenly that's the de facto system. You need that Red Dead Redemption, you need that Halo, you need that God of War, you need those bellwether games to really stand out, so people say, wow, that is so amazing that I'm just going to go buy that
platform, I'm going to get into that platform for that particular experience; and then they say, well, since I'm already here, I probably should look at what else they have, or add to it myself. So it's better to start with high quality and small. Now, we might have mistimed that, or we might have misjudged the need for volume, and that's why we invested a lot of money into Ethereum interoperability, so all that junk on Ethereum could conceivably be ported over to the EVM that we'll have running on a sidechain, in its own black box to keep it from contaminating the rest of the system, and then people can use their Solidity and their web3 and whatever they come up with there. If that's what they want to do, we have support for that too, and at least on our system it runs faster, cheaper, and safer, and it has a much more coherent model, and it has an app store that works with it, which is an easy and good user experience. So that's the strategy there, and it's a long-arc strategy, and they're working real hard. Some of the gals will be rolled over into the enterprise division, and we're going to be running some pilots. I could spend two hours talking to you guys about how cool Enterprise is and how Enterprise is going to drive adoption of millions of people into cryptocurrencies, but I've already been going for an hour and 30 minutes and I'm tired, so that's for a different AMA. All right, well, anyway, thank you so much for all your time, I really appreciated it, this was a heck of a lot of fun. Good night, and I'll talk to you guys soon. Cheers.