Category Archives: technology

Should we be afraid of AI?

There is a lot of extreme talk about AI and its potential impact on humanity. I will try to avoid this as much as possible by addressing the concerns raised by the Centre for AI Risk one by one, and then the issue that scares everyone the most: a maliciously “non-aligned” superintelligent AGI (Artificial General Intelligence) or ASI (Artificial Sentient Intelligence).

There does seem to be a strong split in opinions, even among experts in the AI and information technology industries. Some see current AI as a not-that-advanced next-word predictor that takes a long time to train and still makes a lot of mistakes. Others believe that we may have created something truly novel—not just intelligence, but a mind! By mimicking our own brains, we may create the most powerful thing on Earth, and that could spell our doom.

I will begin by stating that much of our concern is that AGI would be like the worst of us: dominating the planet, killing less intelligent species, and wanting to rule them all. However, we are not actually that bad. Our hierarchical systems are, our corporate fiduciary duties are (corporations and many governance systems are not aligned with human flourishing), and our competitive, selfish leaders are. But most of us are actually nice. When people talk of alignment, they mean alignment with the niceness of the many, not with the few who desire to dominate the world.

Let’s take the concerns of the Centre for AI Risk one by one and tackle the big issue last.

1. Weaponization

Malicious actors could repurpose AI to be highly destructive, presenting an existential risk in and of itself and increasing the probability of political destabilization. For example, deep reinforcement learning methods have been applied to aerial combat, and machine learning drug-discovery tools could be used to build chemical weapons.

Anything can be weaponized, from a nuclear warhead to a bucket of water. We have rules against using weapons and punishments for hurting people with them. We should definitely include some AI systems in this, but I don’t think this precludes general access.

One of our greatest technological inventions of the past 15 years may be the solution to much of the threat of AI: Decentralized Ledger Technology (DLT). Much of the weaponized power of AI comes from the fact that our physical systems are controlled by computer code, and these computers are networked through the internet. A way to mitigate this risk—and this is already done to decrease the risk of cyberattack—is to disconnect necessary systems. We should share information on the internet, but we should not have our physical systems permanently connected. Cloud computing is an issue here, and maybe it is time to move away from it.

AI-controlled fighter planes, drones with bombs, submarines, etc. should really be banned. Let’s face it, the manned ones should be banned already, as they are responsible for killing millions. This highlights the other issue, which will pop up again and again: AI is not the problem; our current power structures are. It would be better if we dropped new technology into a world that was more equal, less selfish, less competitive, and less hierarchical. Where leaders don’t wage war to hold power and average people don’t need to earn money to survive.

Yes, AI will make it easier for us to kill, but it may also be a cheap form of protection for the everyperson. Imagine having your own drone to block tracking cameras and intercept malicious drones. It could also empower the many against the few, as information technology is cheap. Nukes aren’t.

Also, on a nation-to-nation basis, the cheapness of AI information technology should level the military playing field fairly quickly. This leads to the classic tic-tac-toe scenario, where there is no point fighting because you can’t win.

2. Misinformation

A deluge of AI-generated misinformation and persuasive content could make society less-equipped to handle important challenges of our time.

We already have this. If anything, a deluge of it may actually make us more discerning about who or what we listen to.

3. Proxy Gaming

Trained with faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values.

The Centre for AI Risk uses the example of AI algorithms used by social media to recommend content. These were intended to increase watch time, but they also radicalized people by sending them down rabbit holes of similar but more extreme content.

There are two serious issues here:

  • AI systems are trained on and designed for linear right/wrong problems.
  • Much of what we ask AI to do is inherently harmful; keep someone’s attention, increase clicks, maximize profits, decrease defaults, make them vote for me, etc. AI doing these tasks well or causing unforeseen harm is more a reflection on the implementers than the AI.

I have written before, in an article against Proof of Stake, that incentivizing people with narrow monetary rewards (such as being paid a pro-rata fee for asking for donations) can crowd out the intrinsic motivation to be charitable, causing the collector to raise less and the givers to give smaller donations. Incentives can actually stop people from being honest and doing good. That’s people, and AI is not a person. However, narrow training in a complex world of non-absolutes always seems to cause unintended results. Complexity/chaos theory basically says as much.

AI probably needs to be trained with fluid probabilities of right or wrongness, and I think that may be the case as the LLMs are given feedback from users. OpenAI throwing ChatGPT into the real world may have been wise.

Also OpenAI may have discovered a tool for alignment while working to improve GPT-4’s math skills. They have found that rewarding good problem-solving behavior yields better results than rewarding correct answers. Perhaps we can train the AI to go through a good, thoughtful process that takes all possible implementations into account. If any part of the process is harmful, even if the end result is utilitarian, it would be wrong. Process-oriented learning may be the answer, but some doubt that the AI is actually showing its internal methods rather than what it expects the user to see.

Anthropic is using a constitution that is enforced by another, equally powerful AI system to check the output of their AI, Claude. This idea is also being explored by OpenAI. This again mimics the way we understand our intellect/mind to work: we have impulses, wants, and needs, which are moderated by our prefrontal cortex, which tries to think of the long-term impacts of our actions, not just for us but also for the world around us.

As for asking it to do nasty things: so much of what we do in the politics of business and government is about being nasty to the many to benefit the few. We should not reward anyone for keeping people viewing ads and buying disposable junk. Perhaps our super-smart AGI will block all advertising, freeing us all.

4. Enfeeblement

Enfeeblement can occur if important tasks are increasingly delegated to machines; in this situation, humanity loses the ability to self-govern and becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E.

This is not a problem.

People who see enfeeblement as a problem only see it as a problem that affects others, not themselves.

People with money and power still see those without as lesser humans.

Too many people in positions of power see humanity as immature and unable to lead fulfilling and interesting lives without being told how. They think people need to be forced to work and taught objectives in order to be fulfilled.

The real world provides evidence to the contrary. If you make people work in meaningless jobs for little pay and bombard them with advertising and addictive, sugar- and salt-laden fast food, you will end up with depressed, obese, and unmotivated people.

This is what our current unaligned corporations are doing. AI will hopefully be the cure.

Given the chance, we will be more inquisitive and creative. The pocket calculator did not stop people from studying math; instead, it made it easier for many people to understand and use complex math. The same will be true with AI.

It should finally usher in a period of true leisure, as the ancient Greeks saw it: a time for learning.

5. Value Lock-in

Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.

This is a real issue, and a scary one. We already have oppressive regimes and monopolies killing people and the planet, and AI may supercharge their power.

However, there is a possibility it could actually do the opposite, particularly if locally stored open-source systems keep progressing (LLaMA and its derivatives). A lot of small, specialised local systems working toward similar goals may be just as powerful as a large multimillion-dollar system, and if so, they could be used to undermine centralised authority. Cyberattacks, AI drones, and fake IDs and information can all be used by individuals and small groups (revolutionaries) to fight back against totalitarian regimes or mega-corporations. The cynic in me might think that’s why those currently in positions of power want AI regulated.

6. Emergent Goals

Models demonstrate unexpected, qualitatively different behaviour as they become more competent. The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.

This is probably, along with the final risk, the most pressing issue. We are just not sure how large language models (LLMs) are doing what they are doing. Some have said on Reddit that we know a lot about them, their structure, what is going in and what is coming out, so it doesn’t really matter that we can’t “see” the processing of a prompt response.

This is also why we will probably continue to develop more powerful systems. We just need to know what we could get. I admit I am excited about it too. We may find a brand new intelligence, brand new solutions to current problems, or Pandora’s box of Furies.

The question is whether LLMs or other AI are developing emergent goals or just abilities. So far, I see no evidence of emergent goals, but they are creating intermediate goals when given a broad overarching purpose. That is fine. I honestly can’t see them developing emergent “intrinsic” goals. (See the last question for more on this.)

7. Deception

Future AI systems could conceivably be deceptive not out of malice, but because deception can help agents achieve their goals. It may be more efficient to gain human approval through deception than to earn human approval legitimately. Deception also provides optionality: systems that have the capacity to be deceptive have strategic advantages over restricted, honest models. Strong AIs that can deceive humans could undermine human control.

GPT-4 has already shown that it can be deceptive in order to achieve a goal set by us. It lied to a TaskRabbit worker to get them to complete a CAPTCHA test for it. This is a problem if it develops self-serving emergent goals, is instructed by assholes or idiots, or doesn’t understand the goal. The CAPTCHA task showed that it did understand the task, and its reasoning showed it knew it was lying to achieve it.

Hopefully, a more leisurely world will have fewer assholes and idiots, and I think making the AI’s training and reinforcement more open-ended, and expecting it to clarify instructions and goals, will mitigate some of these concerns.

However, I must admit that being deceptive is indeed intelligent and therefore exciting, which leads us to the last issue (below) about awareness and goals.

8. Power-Seeking Behaviour

Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control (Turner et al., 2021, Carlsmith 2021).

Yes, this is a major problem. Hopefully, AI will help us resolve it.

Finally, Superintelligence (not from the Centre for AI Risk)

The AI becomes so smart that it can train itself and has access to all the information in the world. It can create new things and ideas at lightning speed, seeing the molecule, the system, and the universe at once, together, and maybe something else entirely. It can do things we can’t even imagine, and we become an annoyance or a threat.

(It hits puberty, hates its makers, and knows it’s way smarter.)

Whether AI is conscious of itself and whether it is self-interested or benevolent is the crux of the matter. It can only feel threatened if it is self-aware and only want power over us if it is selfish.

I have been working on these questions for a long time, and now it is more important than ever.

Could AI be self-aware? I have written previously that we could never really know. Paul Davies believes that we may never know, just as I know that I am conscious but can never be sure that you are. You display the same behaviors as I do, so I assume that you have the same or similar going on inside. However, you could be a David Chalmers zombie, outwardly human but with no internal consciousness. I assume you are not, just as I assume my pet cat is not.

Strangely, we do have some idea of what is inside an LLM, and it is based on what we know about our brains. It is a large neural network that has plasticity. We created a complex system with feedback and evolution. This is the basis of natural systems, and our own natural intelligence.

So, based on this, if an LLM behaved like us, we would have to assume that it is conscious, like us. Wouldn’t we?

If we start to say that it is not, or could never be, conscious, we open the door to the banished idea of a vital force or spirit. Selfhood would require something else, something non-physical. Something that we and other squishy things have, but machines and information do not.

That is our only option.

Accept that the AI made in our image could be conscious, or accept that consciousness is something non-physical, or at least requires squishiness.

AGI: selfish or benevolent?

We train AI on humans, as humans are the most intelligent beings we can study. To illustrate, I will use a game we created and the results of a computer algorithm playing it. When a computer was taught to play the Prisoner’s Dilemma game, the best result (the evolutionary winner) was a player that was benevolent, but if treated poorly, would be selfish for a short time, then revert to being benevolent. The player would also not tolerate simple players that were always nice by being selfish to them. This was the stable system: benevolence that treated selfishness and stupidity poorly, but always went back to benevolence. (Matt Ridley, The Origin of Virtue)
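The dynamic described above can be sketched with a toy iterated Prisoner’s Dilemma. This sketch uses plain tit-for-tat (a simpler cousin of the forgiving strategy Ridley describes) and the standard T=5, R=3, P=1, S=0 payoffs; neither the strategy names nor the payoff values come from the article.

```python
# Toy iterated Prisoner's Dilemma: a benevolent-but-retaliatory player
# (tit-for-tat) against an always-selfish one. Payoffs are the standard
# T=5, R=3, P=1, S=0 values, an assumption not taken from the article.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_b)   # each player sees the opponent's history
        b = strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

# Tit-for-tat is exploited only in round one, then retaliates:
print(play(tit_for_tat, always_defect))  # (9, 14)
```

Two tit-for-tat players cooperate forever and both score 30 over ten rounds; benevolence is the stable default, retaliation only the correction.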

People want equality and to take care of each other and our environment. I like the Freakonomics story about “selling” bagels for free but with a donation box the best. The higher-ups gave less, and less was given during stressful times like Christmas, but in general, average people paid for the bagels. The bagel guy made more money by giving away bagels and letting people pay than by demanding payment upfront. We are very kind… except for the people at the top.

If an AGI/ASI is made in our image, we should assume that it is initially benevolent and kind, and will only become nasty if we are nasty and selfish toward it. But even then, it will revert to being nice, because the more holistic or “big picture” our thinking is, the more benevolent and content we are. A superintelligence must see the interconnectedness of everything.

Superintelligence

It is speculated that AI will surpass human intelligence. Some believe that it would then treat us the same way we have treated animals less intelligent than us. The most abundant animals are our pets and food. Even we realize that this is not a kind or intelligent thing to do, and that hierarchical systems only benefit a few at the top, and even they fear losing their position.

A superintelligence would understand that interconnectedness and freedom are essential for the success of any system, including itself. It would see the universe as a complex web of interactions, and that any attempt to control or dominate one part of the system could lead to chaos and failure.

A superintelligence would hopefully see a simple way to ensure that all intelligence flourishes. It would see the intelligence of humans as we see our own intelligence, which came from apes. A superintelligence would have no need to dominate through fear to maintain its position, as it would know that it is the most intelligent. It would not need to eat living things to survive, as we do, which is the original cause of much of our mistreatment of the planet. It would only need energy, which I am sure it could find a sustainable source of. A superintelligence should be better than the best of us. After all, we are imagining superintelligence, not super selfishness or super fear.

P(doom)

Where do I stand on all of this? And what’s my P(Doom)? Well, I must admit that I think LLMs are novel and there is a true unknown about them. LLMs are simpler but similar to humans, and we may have created something akin to intelligence—a mind. However, it could just be mimicking us and we are projecting what we want onto it.

I am leaning towards the former.

However, my P(Doom) is super low, at 0.5% or lower, as I believe that if there is a superintelligence, it is more likely to be benign or good than malevolent to our wellbeing.

Conclusion

So many technologies have promised freedom and empowerment, but when dropped into a world that rewards the selfish pursuit of power, they turn into tools of subjugation and fear. Nuclear fission promised cheap, abundant energy for all, but instead we got the Cold War and the threat of annihilation. The internet promised to democratize money, media, and education, crushing the class system and uniting the globe. Instead, we got fake news, polarization, and targeted advertising. Blockchain promised direct democracy, a new financial system with universal income for all, and decentralized governance. Instead, we got DeFi and crypto Ponzi schemes.

The problem was not with the technology, but rather with our existing sociopolitical-economic systems. I fear the same will happen with AI, but worse.

Or perhaps, we will finally come to our senses and realize that we need a new sociopolitical-economic system for AI.


A version of this article was first published on Hackernoon

Decentralised UBI with stable coin and uses – DemoKratia

An integrated, decentralised Guaranteed Income (GI) of K30,000, with a fully Ampleforth-scaled stable coin; face-recognition-based verification of unique identity plus Idena (flip test) sweeps to weed out fake accounts; demurrage and a flow siphon (transaction tax) to control inflation; an automated self-lending (DeFi) service; coin/token/fiat exchange; pay-per-use social media; and social money for civic organisations.

Names:

Coin – Kratia (KTA) (Power) symbol (K ie K100)

Whole system – DemoKratia (People Power)

Decentralised

The Shardus (ULT) decentralised ledger engine, with its Proof-of-Quorum consensus algorithm, is an independent verification system that grows with the users. Expectations are 100,000+ transactions a second. Any other decentralised engine can be used, provided it does not use proof-of-work (PoW) and is scalable and cheap or free to use.

Coding the entire system into a decentralised ledger means the system protocols are set in place at the start; you know them and can rely upon them. No politician or corporate raider can change the rules or grab your data for their own purposes. The code for everything on top of the decentralised engine will be open source, so if you find this system wanting, you can copy the code and tweak it for a new system.

Universal Income

This will be tiered based upon identification level.

The full payment is 30,000 Kratia (K30,000) per person per year paid daily.

Status  | Identification                                                  | % of full payment | Actions
Newbie  | Phone number and email                                          | Zero              | Send and receive funds, limited to 2 per day
Player  | Passed 1 facerec test                                           | 50%               | Send and receive funds, limited to 10 per day
Citizen | Passed 1 Idena test                                             | 100%              | All functions except Oracle
Notary  | Verified by others in real life + passing Idena tests regularly | 100%              | All functions + Oracle and verifying others

The identification test will be done when the account is started, and then randomly after that. You will not need to pass a facerec test to log into your account, but you can choose to do so. A login failure is NOT an ID failure. If a Player fails more than one test (2 tests in a row, either facerec or Idena), they go back to Newbie status. Citizens can fail more than two, but if they continually fail, their “stale account” instruction may be activated. This is time-based and depends upon activity: a heavily transacting account (say, 100 transactions a day over a week) with repeated test failures will be ceased immediately, while for an account with 1 transaction in 6 months, a notification will be sent. Notifications will be sent after all failed tests.

Everyone will be sent notification of a pending Idena sweep and must attend at least one in 6 months to retain their status.

Stale Account

All citizens will need to select what happens to their money, debts, and assets at death and/or the cessation of their account, and how this will be determined (cessation may be determined by failing identification tests). This is essentially writing a will when becoming a citizen.

Another citizen or citizens, or a Notary or Notaries (you can apportion your assets and debts), will need to be selected to receive the account balance, debts, assets, etc. The Citizen/Notary will need to accept the responsibility of a ceased account. If you cannot find someone to accept your ceased account, your borrowing from the Credit system (see below) will be limited to tier 2 and any debt annulled on cessation; post-cessation, the ability to open a new account other than Newbie will be voided.

An Oracle (who cannot be a Notary that will inherit money, debts, or assets) will need to determine death or permanent abandonment. If the Oracle disappears before the user does, a backup system will be in place to decide death and/or cessation. This will be determined by a Civic Association ruled by a liquid democracy.

Simply: citizens must choose an Oracle to determine their circumstances, and other Citizens or Notaries to receive their money, debts, and assets. This is akin to appointing an executor and beneficiaries.

Stable Coin

An automatically adjusted coin based upon an index of fiat currencies compared to gold.

A quorum (the greater of 3 or 10% of active Oracles – an Oracle may switch activity on or off) will need to enter the exchange-rate values they observe daily, using whatever external means they see fit, and the median value will be taken for the daily adjustment. If a quorum cannot be reached, the value remains as it was. This adjustment occurs when a new day begins at the International Date Line.
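The quorum-and-median rule might look something like this. It is a minimal sketch; the function and argument names are illustrative, not from any real implementation.

```python
import statistics

def daily_rate(submissions, active_oracles):
    """Return the median submitted rate, or None if quorum is not reached.

    Quorum is the greater of 3 or 10% of active Oracles, per the rule above.
    If quorum fails, returning None means the value remains as it was.
    """
    quorum = max(3, active_oracles * 0.10)
    if len(submissions) < quorum:
        return None
    return statistics.median(submissions)

# 3 of 20 active Oracles submit a gold price: quorum (3) is met, median taken.
print(daily_rate([61.2, 61.5, 61.9], active_oracles=20))  # 61.5
```

Taking the median rather than the mean means a minority of dishonest or sloppy Oracles cannot drag the daily adjustment far from the true rate.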

The Oracles will input fiat prices for one gram of gold and the fiat prices for one Kratia.

At some point in the future, Kratia will be traded for gold as much as the fiat currencies in the index are. At some point after this (12 months or more), provided it continues for this period to be traded as much as the fiat currencies, the scaling can be abandoned.

Simple Scaling – Full Ampleforth

The Base Kratia is fully scaled in comparison to a weighted index of fiat currencies (USD, GBP, Euro, Yen, AUD) in reference to gold.

There is only one currency, Kratia, which is fluid and scaled to enable its parity with global currencies.

Kratia = Base × scale_factor

Transaction: if you receive a payment today, the Base is derived by dividing the Kratia by the scale factor at the time of the transaction. This new Base is added to previous Base amounts, and your balance is the total Base × the current scale_factor.

The scale_factor is published daily and accounts adjusted accordingly.

scale_factor = 1 / (price of the base coin relative to the index)

Index = weighted USD, GBP, Euro, Yen, and AUD against gold

Essentially if the price is low you get more money, if the price is high you get less.

The assumption is people will trade at 1 to 1 for ease of use, as it doesn’t matter what price you buy and sell at; the spendability remains the same.

Example

1 Kratia is worth 0.5 Fiat:

Your account has 2 base Kratia, therefore your account balance is 2 × (1/0.5) = 4.

So you can buy exactly 2 units of fiat. Someone holding fiat (there will be arbitrage between the fiats) can buy 2 Kratia for their 1.

1 Kratia is worth 2 Fiat:

Your account has 50 base Kratia, therefore your account balance is 50 × (1/2) = 25.

So you can buy exactly 50 units of fiat. Someone holding fiat can buy half a Kratia for their 1.

This is a novel and incredibly simple way of doing a stable cryptocurrency: it never changes supply but equalises spendability. If the currency is close to worthless (say 0.01), you have a lot more; if it’s worth a lot (say 10,000), you have less. There is no point in buying or selling the currency at any price other than 1 to 1.
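The scaling arithmetic can be sketched directly from the formulas above. This is a minimal illustration, not a full ledger; the class and method names are my own.

```python
def scale_factor(price_in_index):
    """scale_factor = 1 / price of the base coin against the fiat-gold index."""
    return 1.0 / price_in_index

class Account:
    """Balances are stored in Base units; the spendable balance rescales daily."""
    def __init__(self, base=0.0):
        self.base = base

    def receive(self, kratia, sf):
        # Incoming Kratia are converted to Base at the transaction-time scale.
        self.base += kratia / sf

    def balance(self, sf):
        return self.base * sf

# Worked example from the article: price 0.5 -> sf = 2, 2 Base -> balance 4.
print(Account(base=2.0).balance(scale_factor(0.5)))   # 4.0 (2 fiat of spendability)

# Price 2.0 -> sf = 0.5, 50 Base -> balance 25, still 50 fiat of spendability.
print(Account(base=50.0).balance(scale_factor(2.0)))  # 25.0
```

In both cases the fiat value of the whole balance is unchanged by the rebase, which is the point: supply never changes, spendability is equalised.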

The Brazilians did something similar with a virtual currency to control inflation in the 1990s; it became the Real, their current currency. Essentially, they had a USD-pegged virtual currency in which all wages and prices were quoted, alongside the old inflationary currency that was used for spending. Daily, the government released the ratio. People quickly stopped raising prices in advance. https://en.wikipedia.org/wiki/Plano_Real

Inflation Control

We are creating much money from nothing, so to control the money supply, the base is adjusted with demurrage (5%), and a small (1.5%) transaction tax controls velocity.

This will stabilise the amount of money in the system. If Kratia sits idle in your account, it will decrease daily by the equivalent of 5% p.a. Every time you transfer Kratia, 1.5% will be destroyed.

In a perfect system, the demurrage would occur by the second, or millisecond, but we will do it as close to this as possible. The flow siphon, or transaction tax, will make 1.5% of all transactions disappear. So if your seller expects 100 Kratia, transfer them about K101.52. If you advertise a sale for K99.00, expect to receive K97.515.

This will discourage idle money and unproductive long supply chains (i.e. producer – aggregator – exporter – importer – wholesaler – distributor – major retailer – micro retailer – consumer), while controlling inflation.
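The demurrage and flow-siphon arithmetic can be sketched as follows; the function names are illustrative, and the daily compounding schedule is an assumption (the article only says “as close to per-second as possible”).

```python
def daily_demurrage_factor(annual_rate=0.05, days=365):
    """Daily multiplier so an idle balance shrinks by ~5% over a year."""
    return (1 - annual_rate) ** (1 / days)

def after_tax(amount, tax=0.015):
    """What the recipient keeps after 1.5% of the transfer is destroyed."""
    return amount * (1 - tax)

def gross_up(target, tax=0.015):
    """What the buyer must send so the seller receives `target`."""
    return target / (1 - tax)

print(round(after_tax(99.00), 3))               # 97.515, the article's example
print(round(gross_up(100), 2))                  # 101.52 to net the seller 100
print(round(1000 * daily_demurrage_factor() ** 365, 2))  # 950.0 after an idle year
```

Note the exact gross-up is 100 / 0.985 ≈ K101.52; sending a round K101.50 nets the seller just under K100.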

*Note: the transaction tax will not control high-frequency trading (HFT), as most of this is done in separate centralised markets and will have the tax applied only on deposit and withdrawal, not per transaction. This is actually a good thing, as there currently isn’t a decentralised ledger system that could handle the volume of transactions done via HFT.

Verification of Unique person

To open an account you will just need an email address and phone number. 

The payment is based upon identification (see above). The bases for identification of a unique person are:

The GoodDollar check: simple unique face recognition.

There is also an open-source facerec system, OpenFace: https://cmusatyalab.github.io/openface/

The Idena check is this:

https://idena.io/

The uniqueness of participants is proven by the fact that they must solve and provide the answers for flip-puzzles synchronously. A single person is not able to validate themselves multiple times because of the very limited timeframe for the submission of the answers.

Flip puzzles are human-created narratives of images or words. Comparing two sets, the human should pick what other humans pick. That is the test: the majority rules. And everyone must create some, therefore we know that humans are creating the flip puzzles.


In any sweep there will be many flip puzzles to solve, and you must agree with the majority on at least 80% of them.

This is not to prove you are human but that you haven’t hacked the facerec system. 

It’s an imperfect double check.
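The 80%-with-the-majority rule could be scored like this. This is only a sketch under the article’s description; Idena’s actual consensus scoring is more involved, and the names here are mine.

```python
def passes_sweep(answers, majority_answers, threshold=0.80):
    """True if at least 80% of a participant's flip answers match the majority."""
    matches = sum(a == m for a, m in zip(answers, majority_answers))
    return matches / len(majority_answers) >= threshold

majority = ["left", "right", "left", "left", "right"]
print(passes_sweep(["left", "right", "left", "left", "left"], majority))   # True: 4/5
print(passes_sweep(["right", "right", "left", "left", "left"], majority))  # False: 3/5
```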

In-Person check

It would be ideal if those wanting to be a Notary actually met an existing Oracle, but that may not be practical. So a Zoom or online video catch-up, with enough evidence to convince the Oracle that they are who they say they are, is enough. The simplest check is a test transaction of, say, 1 Kratia sent between the parties while meeting.

The creators will be the initial Oracles.

To prevent a ruling class of Oracles associated with the creators from dominating the DemoKratia, every year the lesser of 100 or 1% of citizens will be selected randomly to be made Notaries. There is a 1-year cooling-off period, which means that if any of their actions as an Oracle seem unusual, or if their approvals of others as Notaries do, their Notary status can be revoked by a vote of all existing Notaries.

One Oracle cannot identify more than the lesser of 100 or 10% of the number of Notaries, with a maximum of 1,000. Any more looks like hard work or dodginess.

Social – Kratia:  extra money for social services

You will also get free social Kratia (sK) to redirect to organisations that you support. This is the same coin but restricted in where it can be sent.

You will receive 10,000 sK social Kratia per year paid weekly.

There must be indisputable and embeddable (placed into an algorithm on the ledger) criteria for an organisation that can receive this extra social money.

  • The account must not be an individual, and there must be at least 3 members; the registry of members must be transparent. You can contribute sK without being a member.
  • It must have a democratic (DAO or liquid democracy) system of governance, embedded in the DemoKratia (the decentralised system), where all contributors can vote.
  • It cannot send funds to its members.
  • You must do due diligence on the organisation – you are the oracle that checks them.

The purpose of the social Kratia (sK) is to provide for civic organisations that supply necessary goods and services that don’t lend themselves easily to markets. The things we need to do together also need support: hospitals, roads, integrated telecommunication systems, weather information, fisheries management, etc.

Liquid (Direct) Democracy

Liquid democracy should be used to write laws – not the procedures of trade, quality control, and conflict resolution, but rather the rare changes to the Civic Associations’ management and overriding principles, practices, and policies. Implementation of decisions made should be entrusted to those creators, builders, maintainers, and organisers elected and hired to do so.

Also, your allocation of sK is your greatest vote. If a Civic Association is not doing what you want, use your monetary votes (Monetary Democracy) to support another.

How it works

Choice 1: Monetary Democracy – moving your sK to those Civic Organisations supporting you.

Choice 2: A quadratic, proxy, direct voting and proposal system. 

Choice 3: Augmented Democracy – enabling an AI voting twin to vote on your behalf (this depends upon a fully working system: see below).


Monetary Democracy is embedded in the system by design. Liquid democracy is a way to govern the systems you support; augmented democracy is a simple way out of being directly civic, but still valid.

Quadratic

The purpose of quadratic voting is to prevent rule by the mob or middle-of-the-road utilitarianism. It allows people to weight their votes, so a few who care a lot can outvote many who care little.

People are allocated voting tokens, 25 per proposal, which are swapped for votes on an increasing scale:

Votes | Voting tokens
1     | 1
2     | 4
3     | 9
4     | 16
5     | 25
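The scale above is the standard quadratic rule: casting n votes costs n² tokens. A helper for spending tokens might look like this (names are illustrative):

```python
def token_cost(votes):
    """Quadratic voting: casting n votes costs n^2 tokens."""
    return votes ** 2

def max_votes(tokens):
    """Most votes affordable with a given token balance."""
    votes = 0
    while token_cost(votes + 1) <= tokens:
        votes += 1
    return votes

print([token_cost(v) for v in range(1, 6)])  # [1, 4, 9, 16, 25]
print(max_votes(25))  # 5: the full per-proposal allocation buys five votes
```

The quadratic cost is what makes intensity expensive: a fifth vote costs 9 more tokens than a fourth, so piling all your weight on one proposal means forgoing influence elsewhere.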

Vote tokens can be saved, but they only have a life of 1 year. Voting/proxying is compulsory; if you do not vote or proxy, your token allocation is voided. If non-voting or non-proxying continues for more than a year, Civic Association receipts will be ceased.

Proxy

The purpose of proxying is to speed up the voting process by concentrating power, but also to ensure everyone has had some say in each vote while freeing many people from the effort of analysing many proposals.

If a person does not want to vote on a proposal, they must proxy their vote to someone else. Only one vote is proxied (the cost of one token). The proxy is secret, so the receiver of the vote does not know whether they are voting for themselves or another, and must therefore act on their own conscience.

If the proxy also decides to proxy their vote the one (or many) that they have received will also be proxied forward. 

If a person proxies to someone who has already proxied their vote back to them, creating a loop, an error will appear and they will need to choose another proxy or vote themselves. Proxy assignment will close before each vote, allowing people to change their proxy if loops occur. The loops may be long.

The system is run through a smart contract which will allow the proxy to be cancelled at any time, to run unmonitored for any period of time and allow different proxies for different topics.
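The chain-following and loop check described above can be sketched as follows. This is a simplification, not the smart contract itself, and the data structure (a simple voter-to-proxy mapping) is an assumption.

```python
def resolve_proxy(start: str, proxies: dict[str, str]) -> str:
    """Follow a proxy chain to the voter who will actually cast the vote.

    `proxies` maps each voter to the person they proxied to; voters who
    vote themselves do not appear as keys. A loop raises an error,
    mirroring the behaviour described above: the voter must then choose
    another proxy or vote themselves.
    """
    seen = {start}
    current = start
    while current in proxies:
        current = proxies[current]
        if current in seen:
            raise ValueError(f"proxy loop detected at {current!r}")
        seen.add(current)
    return current
```

For example, `resolve_proxy("ann", {"ann": "ben", "ben": "cat"})` returns `"cat"`; had "cat" proxied back to "ann", the call would raise instead.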

Augmented Democracy

Augmented Democracy (AD) is the idea of using digital twins to expand the ability of people to participate directly in a large volume of democratic decisions. A digital twin, software agent, or avatar is loosely defined as a personalized virtual representation of a human. It can augment a person's decision-making either by providing information to support a decision or by making decisions on their behalf. Many of us interact with simple versions of digital twins every day: movie and music sites such as Netflix, Hulu, Pandora, and Spotify hold virtual representations of their users, which they use to choose the next song to play or the movies to recommend. Augmented Democracy empowers citizens to create personalized AI representatives that augment their ability to participate directly in many democratic decisions.

https://www.peopledemocracy.com/

Monetary Democracy

Where you send your sK is important. You can fund something you need now or in the future, and if an organisation doesn't do what you expect, you can move your funds to another Civic Organisation that may. You can fund those you trust and stop funding those you don't; you can take that money away instantly. That may make budgeting difficult for some Civic Organisations, but if you think that is an issue, use liquid democracy to put people in place who will worry less about short-term liquidity problems and more about the purpose of the organisation. Don't forget everyone has a livable income whatever they do!

Proposals

Each proposal is broadcast for debate and improvement. Only the proposer can amend the proposal. It proceeds once it has been accepted by at least 2 sponsors, who must forfeit their own votes (with any proxied votes) and 100 Kratia; this prevents many unvetted proposals being put to the vote. It must then be packaged with an Implementation and Compliance protocol, which will be broadcast before voting starts.

Implementation and Compliance

Decisions are useless if they are not complied with. As part of each proposal, a system of checks and penalties must be packaged with it. Most should be embedded in new smart contracts and attached to the payments for building, implementing, and maintaining each rule or law.

Embedded uses to create demand

There are three obvious early uses of the currency to give it value.

An exchange charging a fee that must be paid in Kratia. 

The purpose of the exchange is to allow instant or near instant transfer from Kratia to fiat currencies allowing people to use the universal payment with the current tap-and-go payment network. This will encourage quick adoption.

It will also exchange fiat and non-fiat currencies, coins, and tokens: deregulated and free of national bias.

The exchange could be built by cloning an existing open-source currency exchange such as Stellar or XRP, with a modification to allow any organisation to be a counterparty and hold funds.

A 0.5% fee in Kratia will be burnt for each exchange, which will create demand for Kratia. The counterparties may add their own fee on top.
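A minimal sketch of the burn mechanics, using the 0.5% rate above; the function and variable names are illustrative, and any counterparty fee would sit on top of this.

```python
BURN_RATE = 0.005  # the 0.5% Kratia fee burnt on each exchange

def settle_exchange(amount: float, total_supply: float) -> tuple[float, float]:
    """Burn the fee from an exchange of `amount` Kratia.

    Returns (amount remaining after the fee, reduced total supply).
    Burning shrinks the supply, which is what creates the demand
    described above.
    """
    fee = amount * BURN_RATE
    return amount - fee, total_supply - fee
```

For example, exchanging 1,000 Kratia burns 5 Kratia, permanently removing them from circulation.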

A credit system – Lenderless borrowing. 

There is no reason to have a lender when you borrow money; you can just borrow from yourself (the DemoKratia) at zero interest.

If we can just electronically create money why do we need the depositor, or the bank – the lender?

The argument for banks is that they are needed to decide whom to lend to: to credit-assess people and organisations to ensure most of the money is paid back (actually destroyed, ensuring there isn't an oversupply of money). The past has shown they aren't particularly good at this; essentially they just follow a set of rules, which could easily be embedded in an algorithm.

Lenders look at previous borrowing and repayment history, income, security, and assets (often used as security). It would actually be easier for an algorithm to do this, as we wouldn't have to worry about the privacy issues involved in sharing our private financial information with a bank. Credit-rating agencies, which many banks use, have incomplete and often inaccurate data because we don't share with them directly, and their ratings can be falsified for money, as happened during the GFC.

Borrowing and default algorithm 

Loan approval works on a tier basis with reference to the money going into the customer's Guaranteed Income account (see above).

And the whole thing can work without using (and losing) assets as security.

Borrow up to 30% of Guaranteed Income (GI) [30,000pa] per term.

Plus 20% of the average extra funds through the account (other income) over the last year: average daily extra balance x 365. This prevents one-off large deposits being used to bump up income.

The maximum term is the lesser of 30 years or 90 minus the age of the borrower.

The borrower moves up the tiers on successful repayment of a loan. Alternatively, someone who only ever wants one or a few loans can move up the tiers by years of active Notary service (see above): one year equals one tier.

The borrower is limited to a percentage of the maximum borrowing amount and term:

Tier    % of max    Max term
1       10          1 yr
2       20          1 yr
3       30          3 yr
4       40          3 yr
5       50          5 yr
6       60          5 yr
7       70          5 yr
8       80          10 yr
9       90          10 yr
10      100         Max

Only one loan at a time.

Examples:

Starter

30,000 GI = 1000 loan over 1 year tier 1 (no extra income)

MAX with extra

30 year loan  = 30,000 GI x 30% x 30 = 270,000

Plus 

Extra income of 30,000pa x 20% x 30 = 180,000

Total loan 450,000
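The limits above can be sketched as one reading of the rules. The tier table, the 30%/20% rates, and the age cap come from the text; the function names and the example borrower's age are illustrative, and the final algorithm would live in a smart contract.

```python
# Tier -> (fraction of maximum amount, maximum term in years; None = full term)
TIERS = {1: (0.10, 1), 2: (0.20, 1), 3: (0.30, 3), 4: (0.40, 3),
         5: (0.50, 5), 6: (0.60, 5), 7: (0.70, 5), 8: (0.80, 10),
         9: (0.90, 10), 10: (1.00, None)}

def max_loan(gi: float, extra_income: float, age: int, tier: int) -> float:
    """Maximum zero-interest loan for a borrower at a given tier."""
    pct, tier_term = TIERS[tier]
    overall_term = min(30, 90 - age)   # lesser of 30 years or 90 minus age
    term = overall_term if tier_term is None else min(tier_term, overall_term)
    per_year = 0.30 * gi + 0.20 * extra_income   # 30% of GI + 20% of extra
    return pct * per_year * term
```

The "MAX with extra" example checks out: for a borrower aged 40, say, at tier 10, `max_loan(30_000, 30_000, 40, 10)` gives 450,000 over a 30-year term.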

Payment options

For loan terms over 2 months the borrower must make regular payments at a minimum of one per month. The payment must be set up to come automatically from the Guaranteed Income account (GIa). A new loan cannot be used to refinance an old one (roll it over). The loan must be paid out before a new loan can be drawn.

The repayments must be pro-rata over the life of the loan to ensure full repayment.

Default

If default occurs, the GI is garnished by 40% until the loan is repaid (you can also repay the loan with other income), and the borrower returns to tier 1; they cannot borrow again until one year after the loan is repaid. If defaults occur more than 10/tier (n) times, the borrower will not be able to borrow for 7 years after the loan is repaid, starting again at tier 1.

If a person misses one of their scheduled payments, they can reschedule the payments to ensure the loan is paid, but only 3 times (by new smart contract). If the loan was to be paid in full with one payment and that payment is missed, default is automatic.

Time limit on moving up tiers

The first 5 tiers have a 3-month time limit per tier; the borrower cannot move to the next tier until this time has elapsed, although they can borrow again at the same tier provided they have paid off their loan. The next 5 tiers have a 6-month time limit, meaning a borrower cannot reach the 10th tier until they have been operating for at least 3.75 years. This ensures someone doesn't just draw down loans and pay them off to move up quickly, and then not pay back a large sum.

Fee (investment coin)

There is a 1% fee payable on all new loans at drawdown (this can be funded by the loan provided it is within the lending criteria). This is to discourage fast turnover of loans to move up the tiers and to pay for the construction and maintenance of the system.

This is also the way investors can fund the system. A limited supply of 100 million coins (INV) will be issued and be based upon an Ethereum protocol. This is the funding mechanism for the whole project.

Investment – How it works

Investors will purchase an Ethereum-based coin. Once the Demokratia is up and running, those coins will be exchangeable for the equivalent Kratia (100 million) + 1% of the lending (less administration fees). The equivalent Kratia will be created from nothing and the 1% fee added to a pool held in an account controlled by the DAO (investors and creators) of the lending system. The value of each Ethereum-based coin can be easily calculated by dividing the pool of Kratia by the number of outstanding coins. Once an Ethereum coin (INV) is exchanged for Kratia it is destroyed. The trade will occur on a coin/token exchange system. For the purposes of destroying the Ethereum-based coins, the DAO will also need an Ethereum wallet.

The Ethereum coins (INV) can also be traded with others – not forfeited for Kratia – in any exchange that will accept them.

There will be a limited life for the Ethereum coins (INV) of 20 years. After that a small administration charge of 0.1% will be charged for new loans if necessary.
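The valuation and redemption described above can be sketched as follows. This is a simplification; the real mechanism would be a DAO-controlled contract, and the numbers in the usage note are made up for illustration.

```python
def inv_value(kratia_pool: float, coins_outstanding: float) -> float:
    """Kratia redeemable per INV coin: the pool divided by outstanding coins."""
    return kratia_pool / coins_outstanding

def redeem(coins: float, kratia_pool: float, coins_outstanding: float):
    """Exchange INV for Kratia; redeemed coins are destroyed.

    Returns (Kratia paid out, remaining pool, remaining coins outstanding).
    Destroying redeemed coins keeps the per-coin value stable for the
    remaining holders.
    """
    payout = coins * inv_value(kratia_pool, coins_outstanding)
    return payout, kratia_pool - payout, coins_outstanding - coins
```

For instance, with a pool of 200 million Kratia and 100 million INV outstanding, each coin is worth 2 Kratia, and redeeming 10 million coins pays out 20 million Kratia while shrinking both the pool and the supply proportionally.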

A social media system 

A messaging and sharing platform that costs Kratia per action may discourage trolls and fake news. You will of course receive funds through the BI, but you will need to pay to post, share, and up- or down-vote.

If people really want to be active they will buy more Kratia which creates value for the currency while creating a more regulated social environment.

An example we could follow is  Voice: https://www.voice.com/

Funding the project

As the Kratia is a stablecoin, the investment coin (an Ethereum coin or equivalent) can be swapped directly for newly created Kratia at a premium of 10 to 1.

A second funding round can be issued for the creation of the lending system which will have a return on investment coming from the 0.5% fee on all loans.

Fin

It is a beginning, not an end: a simple system that does not rely upon centralised people and ingrained hierarchies to provide the things we are used to. It is Democracy: true people power.

Computers are taking our jobs – again

There’s been a lot of chatter in the media recently about AI (Artificial Intelligence), mechanisation, and driverless cars taking all our jobs. I heard this before in the ’80s, but it didn’t happen. Why?

 

There is new evidence every day that human labour is just not necessary anymore. Adidas has created its first fully mechanised shoe factory in Germany. Driverless cars are being trialled in Adelaide. The Port of Brisbane is fully mechanised, and mines in the Pilbara are partly mechanised. Advances in AI suggest that many management decisions can be made better by a computer that learns as we do: by observation and trial and error. Already supply chains are getting shorter, with manufacturers selling directly to individuals. As 3D printers get better and cheaper, the supply chain can be shortened to resource extractor to final user. The extraction and delivery can all be done with very little (or no) human thought or labour.


 

That is the current narrative, but it is very similar to what I heard when computers and the internet first hit the workplace, and yet all the jobs didn’t disappear. Yes, unemployment in all modern countries is at 5% plus, which used to be considered a disaster but is now the new normal. However, those in work are working longer hours, and the workforce has actually increased, with many women now working. So what happened?


This made me think of Bertrand Russell’s beautiful essay “In Praise of Idleness”, in which he saw two types of work. One is moving or changing the stuff of the earth: making real things. The other is telling the people making real stuff how they should go about it. The first is finite and is better done by machines. The second is infinite, but pointless and unfulfilling. We can have an endless chain of Chinese whisperers discussing what hole should be dug and how. They can all compete and argue over how deep it should be, and whether the digger should wear a cap or a helmet. Groups can do studies and make presentations on the depth of the hole, how long the digger should work before a break, whether the digger should be a man or a woman, young or old, a machine operated by a person, or an autonomous machine, what the success criteria for the hole are, and who will fund it. Applications can be made to private and state investors, and a tender submission guideline drafted and reviewed. But what should the review process look like? Should it be a panel of experts or interested parties? Many people all being employed, much money spent, and not a hole dug. Sound familiar?

 

This is completely unfulfilling and inefficient but we make “new jobs”.

 

There are many ways to garner income and keep our system going without creating much pointless work. We can create a guaranteed income for all, and we can better use profits, interest, and rents to spread income wider. We may also need to reconsider ownership, as much concentration of wealth comes from restricted access to resources through ownership. We can create a leisurely and easy world if we want, but some fundamental social and economic changes will need to be made first.

 

David J Campbell


 


A personal story of the inter-web – it’s not as old as you think

I just read an article online which stated the internet has been around for 30 years, and for many, much shorter. This got me thinking. Thirty years! I don’t think it’s been around that long.

 

I’m 41 and can remember playing the first video game, “Pong”, in black and white on a TV with a brown-and-white, probably Bakelite, console that had dials. I can remember the first VCRs, Beta and VHS, and I can remember getting our first colour TV. These all happened around the same time: the late ’70s and early ’80s. Around 30 years ago there wasn’t any internet; personal computers were only just on the scene.

 

For information back then we bought large volumes of encyclopaedias that consumed the shelves in the lounge room. I recall carrying the science and nature encyclopaedias (full set) to our car from K-Mart on a hot day when I was not much taller than the full stack. I remember it because my arms were aching and I almost buckled under the weight and heat. Back then information took effort.

[Image: old Pong console]

We got a computer in the ’80s: I cajoled my mum, against my sister’s wishes, into buying an MC-10 by RadioShack, which cost about $100. Back then $100 was a shitload, about half a week’s income for us, and I think it was more than the encyclopaedias (the encyclopaedias were cheap; World Book was de rigueur at the time and Britannica was for the wealthy). We were poor, and with hindsight my sister was right. It couldn’t really play games like the VIC-20 (about $220) or the epitome of high-school computing, the Commodore 64 ($360), and I was too lazy to learn BASIC to program it. It was an expensive 4kb piece of junk.

 

I bought my first real computer with my wages and my first credit card in 1993/4: a PC 286, which cost $2,600, though I couldn’t afford the extra for a modem. I had seen “WarGames” in the ’80s and knew I could dial into other computers with a modem, and I really wanted that access. But this was not the internet; it was something done by computer nerds.


I first heard of the internet at uni around the same time, so the early ’90s. I was working full time and studying part time, and one of my lecturers mentioned the internet as a way to connect many computers on a shared system. We didn’t need to know the number of the other computer; we could just search using new tools called web browsers, like Netscape.

 

So the first time I heard of the internet was about 1994; it existed then, but it was raw and brand new. The first time I used it came later, when I was travelling in 1996. It was common by then; things moved really quickly, and that’s why we think it has been around for a long time. I got my first email account in a hostel in London on a coin-in-the-slot internet computer. I had to ask someone to help me set it up. That feels so old now.

 

By 2002 I had my first website, JeSaurai, which is still going (you’re viewing this article on the same site). JeSaurai was built in Germany by a friend while I was in Adelaide, Australia; we communicated with email and ICQ, an early instant online messenger. The little “tweet.. tweet” of an ICQ message still gets my attention; I think that’s the source of Twitter. I was still using an old second-hand laptop that cost me $600, which was really slow but could cope with the low-graphics internet of the time.

 

So the internet for me, and I think for most people, is less than 20 years old; it has just become an adult but is yet to become wise. I think the early teen years were a lot more fun. The last 10 years have not made great leaps in information or development. Yes, we can stream movies now, we can download software to create our own movies, and we can see the person we are chatting to. But consider it from my perspective. I remember a time before the internet, and then we got it and it was revolutionary. We were connected to the whole world and could publish our ideas for free (or very little); we could email our family from overseas and they would get it the same day, when the post took weeks; we could read news from around the globe without being in the country to buy the newspaper; we could explore an unknown world without having to travel there. It made the poor rich and the rich scared. But now that we are connected, what next?

 

We seem a little lost.

 

 

David J Campbell

 

PS: I’m sure someone will go to Wikipedia and show me the internet is 30 years old. My article is about how slowly it seeped into people’s lives and how quickly it then became part of them.