How to Use Machine Learning to Trade Bitcoin and Crypto
A place for redditors to discuss quantitative trading, statistical methods, econometrics, programming, implementation, automated strategies, and bounce ideas off each other for constructive criticism. Feel free to submit papers/links of things you find interesting.
What's the consensus in this sub on machine learning for crypto trading?
Is there a consensus among experts on whether it's a waste of time / a scam, or whether it's legit? I've seen a few sites selling signals or auto-traders based on machine learning. I'm very familiar with ML, but I've never used it for crypto trading, and I'm wondering what you all think. In particular, I'm not interested in high-frequency trading; I think server proximity is too much of a factor there. I'm looking more at a few trades per month.
What machine learning libraries and methods would you recommend for creating a predictive trading bot with 3-10TB of crypto market data?
I know not all of them scale well. I'm learning to do some things with scikit-learn, but will it work on enormous data sets? Is there something else that would scale better? Basically, it's a bunch of crypto trading data: probably 500-900 million trades, and roughly half as many order book snapshots (orders that have not yet been matched) for a few different currencies, each with between 200 and 8,000 orders. I'm also planning on using GPUs for this later on, if that factors into the preferred library.
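One pattern that scales to data sets far larger than memory, assuming the trades can be streamed in chunks (e.g. from Parquet or CSV shards), is scikit-learn's out-of-core learning via `partial_fit`. The chunk generator and feature layout below are purely hypothetical stand-ins for the poster's data; a minimal sketch:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Out-of-core learning: stream chunks instead of loading 3-10 TB into RAM.
# Only estimators that expose partial_fit() (SGD*, MultinomialNB,
# MiniBatchKMeans, ...) support this pattern.
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all classes must be declared on the first call

def chunk_stream(n_chunks=10, chunk_size=1000, n_features=5):
    """Hypothetical generator standing in for reading (features, label)
    chunks from on-disk trade data."""
    rng = np.random.default_rng(0)
    for _ in range(n_chunks):
        X = rng.normal(size=(chunk_size, n_features))
        y = (X[:, 0] > 0).astype(int)  # toy label: sign of first feature
        yield X, y

for X, y in chunk_stream():
    clf.partial_fit(X, y, classes=classes)

print(clf.score(*next(chunk_stream(n_chunks=1))))
```

With this approach only one chunk is ever resident in memory at a time, so the limit becomes disk throughput rather than RAM; GPU frameworks follow the same mini-batch idea if you move there later.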
How I applied Buffett's strategies to my own portfolio, +70% net worth, beat the SP500 by 40%
I believe I did pretty well in the market this year. My net worth increased ~65% since its lowest point in March, from ~350k to 620k (20k came from the car I bought in March). I rolled over a 401k and it messed up Mint's reporting, hence the spike from Jul -> Aug. I beat the SP500 by 40% in my YOLO account, and my FAANG account went from 180 -> 300. I did this by following some basic investing principles: buying and holding for the most part, being patient, and only investing in areas in which I have expertise. I did not buy into the TSLA hype, nor do I play options, nor do I play crypto.
Most news is noise, not news (don't read articles about investing)
The best moves are usually boring (buy and hold)
Only listen to those you know and trust
I firmly believe that anyone who follows those concepts will find success in investing.
Keep emotions out of the market
Don't bother timing the market. Don't get ruled by FOMO.
Understand that for some stocks, you can't really average cost down. You will have to stomach buying the stock at a higher entry point. My refusal to average up early on caused me to miss out on a lot of gains.
Understand the difference between trading, investing, and gambling.
Have an exit strategy (stop losses would have helped me a lot in March, I now learned from my expensive mistake)
Be greedy-- not TOO greedy. If a stock pops 10%, I will sell half to lock in profits. It's super common to see a lot of companies pop and the next day dip a bit due to sell off. Perfect time to grab more on the dip. This is obviously impossible to time, which is why I only sell half.
I was very specific about the types of companies I would choose to invest in within tech. I decided to follow my strengths. As a data engineer, I'm very familiar with cloud technologies, and I think I generally have pretty sharp business acumen and good strategic direction. My day-to-day work had me using a ton of technologies in the cloud space: I've used Splunk, NewRelic, Twilio, AWS, GCP, Hortonworks/Cloudera, Oracle, Tableau, Datadog, Sendgrid (bought by Twilio), Dropbox/Box, Slack, Salesforce, Marketo, Databricks, Snowflake, HP Vertica, just to name a few. I was familiar with CDN services like Fastly and Cloudflare because I sometimes worked with the DevOps and IT guys. Based on industry hearsay and day-to-day work, I eventually got a good "feel" for which technologies were widely adopted, easy to use, and had a good reputation in the industry. Similarly, I got a feel for which tech was considered dated or not widely used (HP, Oracle, Cloudera, Dropbox, Box).

I tend to shy away from companies I don't understand. In the past, most times I've done otherwise, I got burned. My biggest losers this year were bets on $NAT and $JMNA (10k total loss). After learning from those mistakes, I decided to focus only on investing in companies that either I or my peers have intimate first-hand experience using. Because of this rationale, the majority of stocks in my portfolio are products which I believe in, thoroughly enjoy using, and would recommend to my friends, family, and colleagues.

Post-COVID, due to the shift to remote work and the increase in online shopping, I decided to double down on tech. I already knew that eCommerce was the next big thing; I made very early investments in SHOP and Amazon in 2017 for that reason. My hypothesis was that post-COVID, the shift to increased online activity, remote work, and eCommerce would mean that companies which build tools to support increased online activity should also increase.
I decided to choose three sectors within tech to narrow down-- these were three sectors that I had a good understanding of, due to the nature of my work and personal habits.
eCommerce + AdTech
IT/DevOps (increased online activity means higher need for infra)
FinTech (increased shopping activity means more transactions)
These are the points I consider before jumping into a stock:
Do I feel good about using the company? Do I believe in the company's vision?
Where do I see this company in 5 years? 10 years? Do I see my potential children being around to use these companies?
What does YoY, QoQ growth look like for this company?
Is/Will this product be a core part of how businesses or people operate?
Who are their customers and target demographic?
(SaaS) Customer testimonials, white papers, case studies. If it's for a technology, I'm going to want to read a paper or use case.
In March, I took what I believe was an "educated gamble". When the market crashed, I liquidated most of my non-tech assets and reinvested them into tech. Some of the holdings I already had; some were newly purchased. EDIT: this isn't called timing the market, you /wsb imbeciles. Timing the market would be trying to figure out when to PULL OUT during ATH and then buying the dip. I SOLD at the lowest point, and with the cash I sold AT A LOSS, I reinvested and doubled down into tech. If I had sold in Feb and bought back in March, that would be called timing the market. What I am doing is called REINVESTING/REBALANCING, not timing the market. I have 50% of my net worth in AMZN, MSFT, AAPL, GOOG, FB, NFLX, and the rest in individual securities/mutual funds. I have 3 shares of TSLA that I got @1.5. Here are the non-FAANGs I chose.
$SQ. I had already been invested in SQ since 2016. I made several bad trades: holding when it first blew past 90 until I sold at 70, then buying in again last year in the 60s after noticing that more and more B&M stores were getting rid of their clunky POS systems and replacing them with Square's physical readers. After COVID, I noticed a lot of pop-up vendors and restaurants doing takeout. A Square reader makes transactions very easy post-COVID.
$ATVI. Call of Duty and Candy Crush print money for them. I've been a Blizzard fanboy since I was a kid, so I have to keep this just out of principle.
$SHOP. They turned a profit this year, and I think there is still a lot more room to grow. It's become somewhat of a household name. I've met quite a few people who mentioned that they have a Shopify site set up to do their side hustle. I've tried the product myself, and can definitely attest that it's pretty easy to get an online shop up and running within a day. I 5.5xed my return here.
$BIGC. I bought into this shortly after IPO. I'm very excited to see an American Shopify. BigC focuses on enterprise customers right now, and Shopify independent merchants, so I don't see them directly competing. I'm self aware this is essentially a gamble. I got in at 90, sold at 140, and added more in 120s. I def got lucky here... it's not common for IPOs to pop so suddenly. I honestly wasn't expecting it to pop so soon.
$OKTA. Best-in-class SSO tool. Amazing tool that keeps track of all of my sign-ons at work.
$DDOG. Great monitoring tool. Widely adopted and good recommendations throughout the industry. Always had a nice looking booth at GoogleNext.
$ZM. Zoom was the only video conf tool at work which I had a good time using. Adoption had blown up pre-COVID already in the tech world, and post-COVID, they somehow became a noun. "Zoom parties" and "Zoom dates" somehow became a thing interwoven into peoples' day to day lives.
$TWLO. Twilio sells APIs which allow applications to send messages like text, voice, and video chat. For example, when DoorDash sends you a text at 1 AM reminding you that your bad decision has arrived, that text is powered by Twilio. In March, New York announced that they were going to use Twilio to send SMS notifs for COVID contact tracing.
$NET/$FSLY. These two seem like the ones best poised for growth in the CDN space. This is based on industry exposure and chatting with people who work in DevOps.
$DOCU. people aren't going to office to sign stuff, super easy to use, I like their product.
$WMT. eComm, streaming, and a very substantial engineering investment makes me think they have room to grow. Also I really need to diversify.
$COST. When is the last time you heard someone say "Man I hate going to Costco and paying $1.50 for a hotdog and soda?" Diversification. Also cheap hotdogs.
$NVDA/AMD. GPUs are the present and the future. Not only are they used for video games, but machine learning now runs on GPUs instead of CPUs (TensorFlow, for example). Crypto is still a thing as well, so there will always be a constant need for GPUs.
Mutual funds/ETFs:
$FSCSX. MF which focuses on FinTech.
$VTSAX. Pretty much moves with the SP500.
$WCLD. Holdings include Salesforce, Workday, Zuora, Atlassian, Okta, New Relic, Fastly...
Titanvest: I was an early access user, and I was able to secure 0% fees for my account. 36% gains so far. I like them because their portfolio happens to include shares of tech giants that I either don't have individual stocks for or my stake is low (CRM, PYPL). It nicely complements my existing portfolio.
Some things I do that go against the grain:
Not really diversified. 80% is in tech. They are in very different sectors of tech, but the truth is, when tech falls, all of these companies fall. I'm obviously long tech and I do not believe tech will fall anytime soon. What about the dot-com bubble? There wasn't a single dot-com company that was integral to our lives then; the internet was in its infancy. Technology is now such an interwoven part of our lives that I see companies like Apple, Amazon, and Google sticking around for several generations.
I don't read investing articles. I think people who write articles about a stock all have ulterior motives-- to pump or to dump. Case in point-- Citron Research spent years writing articles telling people how SHOP was overvalued. Why did they do that? Because they were short sellers at the time. I turned 5k into 27k because I held on to most of my SHOP shares.
I don't take much value from balance sheets, other than net loss, income, and YoY growth. Instead, I use my business acumen to try to pick up on info that isn't readily apparent from Google. For example, one thing I always do is look at the careers page to see how the business is growing. An increase in marketing/sales/implementation engineers is typically a solid sign that a company is preparing headcount to take new deals in the upcoming quarters. I also look at the product roadmap, supported integrations, and customer base.
One example of how I applied the above principle was WalMart. In 2018 I noticed that I was getting targeted by a lot of data engineering job listings for WalMartLabs, WalMart's tech division. The role was to build out a big data pipeline to support their eCommerce platform. WalMart's online store released in Q3 of 2019. Post-COVID, I used their online store and it was a seamless experience. They even offer a 5% cash back card like Amazon. They reported strong Q4 sales last year, and they did very well post-COVID. Why did I choose to invest in $WMT? Because I believe WalMart has room to grow its online platform.

Lastly, remember that wealth isn't accrued overnight; it takes years to build. The quickest way to increase your wealth is by investing in yourself: your career and earning potential. The sooner my income increased, the quicker I had more capital to buy into stocks.

Also, if you've gotten this far: the point of my post isn't to say that you should invest in tech. The message I'm trying to get across is, when picking companies, pick companies in fields or verticals you have good knowledge of. Heed Buffett's advice to only pick companies you believe in and understand. Play to your strengths, and don't mindlessly toss money around based on one person's posts on Reddit. Always do your own due diligence: use DD as a guide, and use personal research and experience to drive your decision.
The 4th way of algorithmic trading (Signal Processing)
Algorithmic trading approaches are usually classified by development perspective: 1) Technical Analysis, 2) Statistics and Probability, 3) Machine Learning. I took a different path, one not widely discussed in this subreddit: 4) Signal Processing. I'm not a good storyteller, but this is my journey and my advice for beginners.
First, my background:
- Electrical and Electronics engineer
- Software developer (20+ years)
- Trader (5+ years)
- Algorithmic trader (3+ years)
How I Found The Alpha:
Before algorithmic trading, I was a somewhat profitable trader/investor. Like most of you, when I began algorithmic trading I tried to find a magic combination of technical indicators and parameters. I also threw OHLCV and indicator data into an RNN for prediction. I saw that even very simple strategies, like the famous moving average crossover, are profitable under the right market conditions with the correct parameters. But you must watch such a strategy carefully, and if you feel it is not working anymore, you must shut it down. That means you must be an experienced trader to take care of your algorithm. I am a full-time software developer; algorithmic trading was my side project and became my hobby. I tried to learn everything about this industry. I watched and listened to hundreds of hours of podcasts and videos in all my free time, such as while commuting between home and work. These were the most useful to me:
- Chat With Traders: https://www.youtube.com/channel/UCdnzT5Tl6pAkATOiDsPhqcg
- Top Traders Unplugged: https://www.youtube.com/usetoptraderslive
- UKspreadbetting: https://www.youtube.com/channel/UCnKPQUoCRb1Vu-qWwWituGQ
I also read plenty of academic papers, blog posts, and this subreddit for inspiration. The inspiration came from my own field, electronics. I will not give much detail, but I have developed a novel signal processing technique. It is a fast and natural technique, and I couldn't find any article or paper that mentions this method. It can transform price data of any interval into a meaningful, tradable form. The best part is that it doesn't require any parameters, and it adapts to changing market conditions intrinsically. These are the concepts that inspired me:
- Information Theory: https://en.wikipedia.org/wiki/Information_theory
- Signal Processing: https://en.wikipedia.org/wiki/Signal_processing
- ADC: https://en.wikipedia.org/wiki/Analog-to-digital_converter
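As a concrete illustration of how simple the strategies mentioned above can be, here is a minimal moving-average crossover signal generator. The window lengths are exactly the kind of "parameters" the post warns must match current market conditions; the synthetic price series is purely illustrative:

```python
import numpy as np

def ma_crossover_signals(close, fast=20, slow=50):
    """Classic moving-average crossover: long (1) when the fast MA is
    above the slow MA, flat (0) otherwise."""
    close = np.asarray(close, dtype=float)

    def sma(x, n):
        # Simple moving average via convolution; output is shorter by n-1.
        return np.convolve(x, np.ones(n) / n, mode="valid")

    fast_ma = sma(close, fast)[slow - fast:]  # align both MAs to the same dates
    slow_ma = sma(close, slow)
    return (fast_ma > slow_ma).astype(int)

# Toy usage on a noisy synthetic uptrend
prices = np.linspace(100, 200, 300) + np.random.default_rng(1).normal(0, 2, 300)
signals = ma_crossover_signals(prices)
print(signals[-5:])
```

Even this toy version shows the point: it works nicely on a trending series and whipsaws in a sideways one, which is why the author says someone experienced has to watch it.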
The output of the process can be used to develop endless types of profitable strategies. I made some money with different momentum-based strategies while thinking about how I could use this technique more efficiently. I like to combine different fields. I think trading and life itself have many things in common, so besides general trading concepts, I tried to implement concepts from life itself. Also, because of the parameterless design, it is more a decision-making process than an optimization problem. I searched for proverbs and advice about better decision making, took them one by one, and thought about how I could implement them in a unified strategy while preserving the parameterless design. Over time this significantly improved stability and reliability, while the strategy evolved from momentum to mean reversion. These are some proverbs which I use in various aspects of the algorithm:
- “The bamboo that bends is stronger than the oak that resists.” (Japanese proverb) - "When the rainwater rises and descends down to where you want to cross, wait until it settles." (Sun-Tzu) - "If you do not expect the unexpected you will not find it, for it is not to be reached by search or trail" (Heraclitus)
If you wonder how I implement them in the code, think about the last one; how do you define the unexpected, how to wait for it and how to prepare your algorithm to generate profit. By the way, I strongly recommend: The Art of War (Sun-Tzu)
I have plenty of ideas waiting to be tested and problems that need to be solved. Nevertheless, these are some of the backtest results for the time being:
Crypto:
- Market fee and spread are considered; slippage is not.
- For multiple-asset testing, survivorship bias was addressed using the historical market rank of the assets. The data is acquired from coinmarketcap.com weekly reports.
- Pairs tested: ETH/BTC, BNB/BTC, Binance historical top 100 / BTC.
Other markets: My main focus is crypto trading, but all improvements are cross-checked in different markets and intervals, and validated empirically and logically. The algorithm can't beat every asset on every interval, but it tends to work profitably across them.
https://preview.redd.it/l865fw6mjfd51.png?width=900&format=png&auto=webp&s=ff217d35637b41e26db8d7cfc3df14c3fb7ec14e
Live: The algorithm has been running live for over 1.5 years with the evolving strategies I mentioned before. The latest one has been running for months.
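The survivorship-bias control described here (restricting each backtest date to the assets that were top-ranked at that date, not today) can be sketched as a point-in-time universe lookup. The snapshot dates and tickers below are made up, standing in for coinmarketcap.com's historical weekly reports:

```python
from datetime import date

# Hypothetical weekly ranking snapshots: for each backtest date we must
# use the ranking that was known *at that time* to avoid survivorship bias.
weekly_rankings = {
    date(2018, 1, 7): ["BTC", "ETH", "XRP", "BCH", "ADA"],
    date(2018, 1, 14): ["BTC", "ETH", "XRP", "BCH", "TRX"],
}

def universe_for(day, rankings, top_n=3):
    """Return the top-N assets from the latest snapshot at or before `day`."""
    snaps = [d for d in rankings if d <= day]
    if not snaps:
        raise ValueError("no ranking snapshot available before this date")
    return rankings[max(snaps)][:top_n]

print(universe_for(date(2018, 1, 10), weekly_rankings))  # uses the Jan 7 snapshot
```

The key design choice is that the lookup only ever sees snapshots at or before the backtest date, so delisted or dropped-out assets stay in the historical universe.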
Warnings and Advices:
- Bugs: A few months ago, before bedtime, I released a new version fixing a small cosmetic bug and went to sleep. When I woke up, I saw that nearly 40% of my account had been wiped out in a few hours. Instead of live settings, I had published test settings. It was very painful. I have been coding since childhood, so everyone must be careful. I recommend implementing a hard limit for stopping the algorithm.
- Fully Automatic Strategy: Finding an edge is not enough. If you want a fully automated trading system, you need a portfolio manager (a lot of research is going on in this field) and especially an asset-selector mechanism, which is maybe more important than the edge itself. If your algorithm is not able to select which assets to trade, you must select them manually; that's not an easy task and it's prone to error. I was very lucky here: a mechanism already contained in the algorithm was used to rank and select assets based on their momentum.
- Fee-Spread: Because of the market fee and spread, trading is a negative-sum game. Do not ignore them when backtesting your algorithm.
- Slippage: It's a real problem for low-volume assets like penny stocks and lower-market-cap cryptocurrencies. Stay away from them, play with small capital, or find a way to determine how much money you can deploy.
- Latency: Don't think it's an HFT-only problem. If your algorithm synchronizes data for multiple assets from the market and runs calculations before sending an order back, you lose a significant amount of time. This usually causes losses you have not considered, especially in a volatile environment. Also, if you want to develop a realtime strategy, you must seriously consider what you will do during downtime.
- Datasource: This is the most important part of preparation before developing your strategy. If you don't have good, reliable data, you cannot develop a good strategy.
For free data for various markets, I suggest investing.com, but note that volume data is not provided. For crypto, all the exchanges provide their real data for any asset and any interval; you can use them freely. You can also buy data, especially if you want intraday data, but I can't suggest any vendor because I never tested them.
- Biases: Before developing an algorithm, please take a look at and understand the common biases: survivorship bias, look-ahead bias, time-period bias. Otherwise, you can be sure you will face them when you go live.
- Live trading: When you think your algorithm can make money, don't wait for perfection. Go live as soon as possible with small capital, to wake up from your dreams and face the facts early.
- Psychology: If your education is STEM-based and you don't have trading experience, it's not easy in the real world to swallow all those ups and downs that you see in minutes during a backtest. It can affect your mood and your life much more than you think. I suggest working with a professional trader, or only invest what you can really afford to lose.
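The "hard limit for stopping the algorithm" recommended in the bugs warning can be as simple as a drawdown-based kill switch: halt all trading once equity falls a fixed fraction below its running peak. A minimal sketch, with a hypothetical 10% threshold and a toy equity sequence:

```python
class KillSwitch:
    """Stop trading once drawdown from peak equity exceeds a hard limit."""

    def __init__(self, max_drawdown=0.10):
        self.max_drawdown = max_drawdown
        self.peak = None

    def allow_trading(self, equity):
        """Update the running peak; return False once the limit is breached."""
        if self.peak is None or equity > self.peak:
            self.peak = equity
        drawdown = 1.0 - equity / self.peak
        return drawdown <= self.max_drawdown

switch = KillSwitch(max_drawdown=0.10)
for equity in [100.0, 105.0, 101.0, 93.0]:  # hypothetical account values
    if not switch.allow_trading(equity):
        print(f"Kill switch tripped at equity {equity}: cancel orders and stop")
        break
```

A check like this would have capped the 40% overnight loss described above at the configured limit, regardless of which settings file was accidentally deployed.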
After a journey of over 3 years, I have a profitable algorithm that I trust. I was supposed to be lying on the beach drinking beer while my algorithm printed money, but I am constantly checking its health, and I always have things to do, as with all software development projects. I posted some of the backtest results, but I don't know whether they count as P/L porn or not; if so, I can remove them. Sorry about the mysterious parts of this post. I unwillingly removed some parts before posting, but there is really a thin line between giving away your edge for free (which also means losing it) and inspiring people to find their own way.
“Non est ad astra mollis e terris via" - Seneca
For those engineers and EE students who are bombing my inbox with guesses about what I did: I cannot reply to all of you in private, and I want to explain it publicly. I must say, you are on the wrong track. If I open-sourced the signal processing part, it probably wouldn't mean anything to you, and you could not turn it into a profitable algorithm. I have to clarify: before I developed the technique, I knew exactly what I was looking for. Signal processing is not magically trading the market; I am trading the market. It's just a tool to do what is in my mind nearly perfectly. The proverbs, too, are a way of thinking: I read them and consider whether they mean anything for trading. Lastly, watch Kung Fu Panda :) https://www.youtube.com/watch?v=rHvCQEr_ETk
You've probably been hearing a lot about Bitcoin recently and are wondering what the big deal is. Most of your questions should be answered by the resources below, but if you have additional questions feel free to ask them in the comments. It all started with the release of Satoshi Nakamoto's whitepaper; however, that will probably go over the heads of most readers, so we recommend the following videos as a good starting point for understanding how bitcoin works and a little about its long-term potential:
Limited Supply - There will only ever be 21,000,000 bitcoins created and they are issued in a predictable fashion, you can view the inflation schedule here. Once they are all issued Bitcoin will be truly deflationary. The halving countdown can be found here.
Open source - Bitcoin code is fully auditable. You can read the source code yourself here.
Accountable - The public ledger is transparent, all transactions are seen by everyone.
Decentralized - Bitcoin is globally distributed across thousands of nodes with no single point of failure and as such can't be shut down similar to how Bittorrent works. You can even run a node on a Raspberry Pi.
Censorship resistant - No one can prevent you from interacting with the bitcoin network and no one can censor, alter or block transactions that they disagree with, see Operation Chokepoint.
Push system - There are no chargebacks in bitcoin because only the person who owns the address where the bitcoins reside has the authority to move them.
Low fee scaling - On chain transaction fees depend on network demand and how much priority you wish to assign to the transaction. Most wallets calculate on chain fees automatically but you can view current fees here and mempool activity here. On chain fees may rise occasionally due to network demand, however instant micropayments that do not require confirmations are happening via the Lightning Network, a second layer scaling solution currently rolling out on the Bitcoin mainnet.
Borderless - No country can stop it from going in/out, even in areas currently unserved by traditional banking as the ledger is globally distributed.
Portable - Bitcoins are digital so they are easier to move than cash or gold. They can even be transported by simply memorizing a string of words for wallet recovery (while cool this method is generally not recommended due to potential for insecure key generation by inexperienced users. Hardware wallets are the preferred method for new users due to ease of use and additional security).
Bitcoin.org and BuyBitcoinWorldwide.com are helpful sites for beginners. You can buy or sell any amount of bitcoin (even just a few dollars worth) and there are several easy methods to purchase bitcoin with cash, credit card or bank transfer. Some of the more popular resources are below, also check out the bitcoinity exchange resources for a larger list of options for purchases.
Here is a listing of local ATMs. If you would like your paycheck automatically converted to bitcoin, use Bitwage. Note: Bitcoins are valued at whatever market price people are willing to pay for them, in a balancing act of supply vs demand. Unlike traditional markets, bitcoin markets operate 24 hours per day, 365 days per year. Preev is a useful site that shows how much various denominations of bitcoin are worth in different currencies. Alternatively you can just Google "1 bitcoin in (your local currency)".
Securing your bitcoins
With bitcoin you can "Be your own bank" and personally secure your bitcoins OR you can use third party companies aka "Bitcoin banks" which will hold the bitcoins for you.
If you prefer to "Be your own bank" and have direct control over your coins without having to use a trusted third party, then you will need to create your own wallet and keep it secure. If you want easy and secure storage without having to learn computer security best practices, then a hardware wallet such as the Trezor, Ledger or ColdCard is recommended. Alternatively there are many software wallet options to choose from here depending on your use case.
If you prefer to let third party "Bitcoin banks" manage your coins, try Gemini but be aware you may not be in control of your private keys in which case you would have to ask permission to access your funds and be exposed to third party risk.
Note: For increased security, use Two Factor Authentication (2FA) everywhere it is offered, including email! 2FA requires a second confirmation code to access your account making it much harder for thieves to gain access. Google Authenticator and Authy are the two most popular 2FA services, download links are below. Make sure you create backups of your 2FA codes.
As mentioned above, Bitcoin is decentralized, which by definition means there is no official website, Twitter handle, spokesperson, or CEO. However, all money attracts thieves. This combination unfortunately results in scammers using official-sounding names or pretending to be authorities on YouTube or social media. Many scammers throughout the years have claimed to be the inventor of Bitcoin. Websites like bitcoin(dot)com and the btc subreddit are active scams. Almost all altcoins (shitcoins) are marketed heavily with big promises but are really just designed to separate you from your bitcoin. So be careful: any resource, including all linked in this document, may in the future turn evil. Don't trust, verify. Also, as they say in our community, "Not your keys, not your coins".
Where can I spend bitcoins?
Check out spendabit or bitcoin directory for millions of merchant options. You can also spend bitcoin anywhere Visa is accepted with bitcoin debit cards such as the CashApp card. Some other useful sites are listed below.
Mining bitcoins can be a fun learning experience, but be aware that you will most likely operate at a loss. Newcomers are often advised to stay away from mining unless they are only interested in it as a hobby similar to folding at home. If you want to learn more about mining you can read more here. Still have mining questions? The crew at /BitcoinMining would be happy to help you out. If you want to contribute to the bitcoin network by hosting the blockchain and propagating transactions you can run a full node using this setup guide. If you would prefer to keep it simple there are several good options. You can view the global node distribution here.
Just like any other form of money, you can also earn bitcoins by being paid to do a job.
You can also earn bitcoins by participating as a market maker on JoinMarket, allowing users to perform CoinJoin transactions with your bitcoins for a small fee (requires you to already have some bitcoins).
The following is a short list of ongoing projects that might be worth taking a look at if you are interested in current development in the bitcoin space.
One Bitcoin is quite large (hundreds of £/$/€) so people often deal in smaller units. The most common subunits are listed below:
- BTC (bitcoin): one bitcoin is equal to 100 million satoshis
- mBTC (millibitcoin): 1,000 per bitcoin; used as the default unit in recent Electrum wallet releases
- bits (microbitcoin, μBTC): 1,000,000 per bitcoin; "bit" is the colloquial "slang" term for microbitcoin
- satoshi: 100,000,000 per bitcoin; the smallest unit in bitcoin, named after the inventor
For example, assuming an arbitrary exchange rate of $10,000 for one Bitcoin, a $10 meal would equal:
- 0.001 BTC
- 1 mBTC
- 1,000 bits
- 100,000 satoshis
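The unit conversions above are simple arithmetic; a small helper makes the worked example explicit (the $10,000/BTC rate is the FAQ's arbitrary assumption, not a real quote):

```python
SATS_PER_BTC = 100_000_000  # 1 BTC = 100,000,000 satoshis

def usd_to_units(usd, usd_per_btc):
    """Convert a dollar amount into BTC, mBTC, bits, and satoshis."""
    btc = usd / usd_per_btc
    return {
        "BTC": btc,
        "mBTC": btc * 1_000,       # millibitcoin
        "bits": btc * 1_000_000,   # microbitcoin
        "satoshis": round(btc * SATS_PER_BTC),
    }

# The FAQ's example: a $10 meal at an assumed rate of $10,000 per BTC
print(usd_to_units(10, 10_000))
```

For real prices you would substitute the current exchange rate; the unit ratios themselves are fixed by the protocol's conventions.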
For more information check out the Bitcoin units wiki. Still have questions? Feel free to ask in the comments below or stick around for our weekly Mentor Monday thread. If you decide to post a question in /Bitcoin, please use the search bar to see if it has been answered before, and remember to follow the community rules outlined on the sidebar to receive a better response. The mods are busy helping manage our community so please do not message them unless you notice problems with the functionality of the subreddit. Note: This is a community created FAQ. If you notice anything missing from the FAQ or that requires clarification you can edit it here and it will be included in the next revision pending approval. Welcome to the Bitcoin community and the new decentralized economy!
Congrats Apex community 1000% in $ and 650% in Neo value since end of Feb
When the bears were making the crypto community suffer, the APEX team kept working quietly but hard. It took only a short period to climb out of the long bear market, and APEX, like other projects, is alive and kicking again. The community and admin team are very active, and many old-time investors are returning and buying back tokens. There are many positive things on the horizon for Apex, just to name a few:
Corporate buyback starting this month and continuing for at least a year: tens of millions of tokens
A new Market Maker
A new trading pair on lbank USD/CPX
New exchange(s) around token swap
Mainnet release before October
Finally some great communication from the team with every other week at least a tech update and very often AMA’s with Jimmy.
And of course many other great milestones to be reached within a short period of time. Combine all this great news with the clientele APEX tech already has, and we are in for a major rally. Clients like Spring Airlines, Carbon Treasure and Car BaaS are already testing on the testnet, just to name a few. APEX is the only company in China that has its own blockchain combined with its own AI products, enabling corporate customers to achieve compliant data transactions and machine learning with each other! The project that got so much attention in 2017/2018 has awoken, and it's still a hidden gem. People are trying to buy many tokens, but not a lot of people are willing to sell. The buy side of the order book is full and the sell side is almost empty; combine this with a new MM plus the corporate buyback, and there is only one way we can go... and it will be fast! Buckle up, get in while you still can!
Geeq - A serious contender for the most game changing project of 2020.
Recently posted in the Geeq Trading Channel by a community ambassador. I'd like to officially welcome all the new people to the Geeq community. This is going to be long, but I'd just like to say a few words and get it out of the way. I've talked with (username removed) about the discussions of other projects in this channel; we agreed to let it happen until the official launch of the Geeq token, as this is a community trading channel and it isn't trading as of yet. In a way it's important to acknowledge the good and bad about the macro outlook of crypto so people can learn from the mistakes together, and we don't want to discourage thoughtful and informative dialogue. (username removed) and I are community members; whatever we do or say should not be held against the Geeq core, so please take our views as just personal opinions. We will have many of them, lol.

The why's. Why did I invest in Geeq? Why did my estimate of Geeq's market cap increase week after week? Before I really looked into Geeq I had a sell price point at $1; that was 9 months ago. Why am I now seeing the bigger picture of the potential behemoth Geeq is? Why is the team willing to work for free for all these years? Why is the marketing team doing this for free? They had a very lucrative business before they joined the wings of the Geeq protocol (and they still do), so why do it for free? Why is the team stacked with the most experienced individuals in their fields? Why is Ric Asselstine, the gentleman who helped build Canada's biggest software company, doing $5 billion revenue a year, dedicating so many years to Geeq's protocol? Why did John P. Conley, with all his insane credentials in game theory, mechanism design, mathematical economics, and public theory, join the team and do it for free for all these years? Why does Stephanie So, CDO & founder of Geeq, dedicate all these years to working for free for Geeq?
She was the first to use machine learning on social science data at the National Center for Supercomputing Applications, and Dell recently named her one of the top 4 influential women in tech. Why is she doing this? Why is Lun-Shin Yuen working on Geeq for free? He was the THIRD engineer working there; they are now worth $77.23 billion in market share and doing over $6.7 billion of revenue a year. Why is he here? Why is Eric Ball, the treasurer of Oracle between 2005 and 2015, involved in Geeq? I looked up how well he managed the $60 billion Larry Ellison had: while he was in charge, Larry peaked as the 4th richest person in the world, and Mr Ball was managing his money during that time, so what does that say? Why is Ian Smith, a savant and expert in network and OS security and systems theory, working here for free? He is fluent in 20 coding languages and helped teach Linux at NASA; why is this gentleman here? Why did Dr Yeap reach out to the Geeq team to join as an advisor? He has dozens of patents under him that run for decades. Why did Kurt Hoppe, the current Director of Product Management in the Android Automotive sector at Google, join as an advisor recently? He is a board member of the Internet of Things Consortium, former Global Head of Innovation at General Motors, and former Director at LG Electronics Smart Home & Smart TV; why does he want to get involved with Geeq?

These are a few of my personal opinions. Because they are thinking 100 years ahead, not 1 year ahead. Currently crypto is full of people looking to make a quick buck; this isn't one of them. Sure, you may make some money, but why sell yourself short when you look at the product they have designed, created, protected, and are about to introduce to the world of technology? It's the best security in software: BFT tolerance is 99%, and it is Sybil-, wealth- and nation-state-proof. Money cannot determine control of the protocol; hash power cannot determine control of the protocol.
This is huge. This isn't a buzzword; no one gets excited over security, but to me it's the biggest deal in the technology world. From this point on, anyone who builds on this is inherently protected from future lawsuits (e.g. hacks losing people's funds) due to potential corruption in chains: "what the ledger says, it says," as John puts it. It cannot be changed for the benefit of any individual or nation-state. This means it's appealing to IoT applications such as smart cars (corrupt the data and you could drive a car off a cliff), telemetry, streaming payments, peer-to-peer content, literally anything you can think of; this would be the best platform to do it with. Micropayments: at 1/100th of a cent, streaming and paying per second can now be done. Anonymous: there is no point in showing everyone's details, such as buying personal goods at a store; no one needs to know this type of data, and it's logical in the way it displays important information and weeds out unnecessary public data. Trustless: edge security, so you can see as an individual who is playing by the rules or not. AMP: Algorithmic Monetary Policy, embedded and coded to reduce the risk of sharp drops/sharp pumps and keep a healthy flow. As stated in the White Paper as an example: "A 25 node network running at 20 TPS allows 630M transactions per year and creates 630GB of chain data. Transactions cost users $.0001 each and this generates a total of $62k revenue per year. Approximately $1.7k in $GEEQ is paid to each node per year for their validation services. Geeq receives $21k in annual revenue from one such geeqchain instance." (Ref: Section 7.1). This is ONE example, ONE chain. It does not represent ONE company; it could be ONE function of ONE company. My point is, this is scalable to the billions of IoT devices; this has no limit on data input. If you are using Geeq in a smart city, it will be doing billions of transactions a day. Just one example.
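The white paper arithmetic quoted above is easy to sanity-check yourself. Here is a minimal sketch: the TPS, fee, and node figures come straight from the quoted Section 7.1 example, while the ~1 KB average transaction size is an assumption inferred from 630M transactions producing roughly 630GB of chain data.

```python
# Sanity check of the throughput/revenue arithmetic quoted from the
# Geeq white paper (Section 7.1). The tps and fee figures are taken
# from the quote; tx_bytes is an assumption inferred from the quote
# (630M transactions -> ~630GB of chain data implies ~1 KB each).

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

tps = 20            # throughput of one geeqchain instance, per the quote
fee_usd = 0.0001    # fee per transaction ($.0001), per the quote
tx_bytes = 1_000    # assumed average transaction size (~1 KB)

tx_per_year = tps * SECONDS_PER_YEAR            # 630,720,000 (~630M)
chain_growth_gb = tx_per_year * tx_bytes / 1e9  # ~630 GB per year
revenue_usd = tx_per_year * fee_usd             # ~$63k gross fees

print(f"transactions/year: {tx_per_year:,}")
print(f"chain growth: {chain_growth_gb:.0f} GB")
print(f"gross fee revenue: ${revenue_usd:,.0f}")
```

The gross fee figure lands near $63k rather than the quoted $62k; that is consistent with the paper's own split of roughly $1.7k paid to each of 25 nodes plus $21k to Geeq, so the small gap looks like rounding in the quote.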
In 2017 there were 7 billion IoT devices; by 2019 the count had reached 26.66 billion; during 2020 it is expected to hit 31 billion. This is a growing market, as you can tell; this is the future, and Geeq will capitalize on it. The list goes on and on. So, since this is a trading channel, this is my opinion as to why this may exceed a lot of people's price points in the long term. We have no chart to determine the value of this coin, but what you can value it on is the target market, potential market share, the team's credentials, the protocol's potential, and the protection of the protocol (patent). A lot of people come in here throwing numbers out without any real evidence to back them up. Everyone has their own personal opinion on prices, which is fine, but please make the most educated guess you can with the information you have as of now; then rethink what Geeq is doing and why I'm so bullish. This is the beginning. Since Geeq is built on decentralization, it's important we act as a community and treat each other with respect. This is a long journey and it's just starting; this channel should be used to lean on each other and build a very healthy community, because this is a very long road. The Geeq team leads by example by showing how much patience they have: they answer the same questions every 30 minutes, and I have not seen a TG group that has done this with such respect for random people (this was a big reason why I went all in). This has the right mixture to be really something big, and I really hope everyone here does their research on Geeq. I'm not technical, but I can see what the vision is. What is my price point? How can you value the most powerful team, protocol and vision in crypto? You can't put a price on a protocol that will be setting the standard.
Website: https://geeq.io One Pager: https://geeq.io/one-page White Paper: https://geeq.io/geeq-white-paper-2/ Tokenomics: https://geeq.io/tokenomics/ Team: https://geeq.io/people/ Token Release Date/Time: https://geeq.io/geeq-token-generation-event/ I hope you enjoyed the read and take a look into the project. I genuinely think this is the biggest project of the year, with enormous potential. Look at all the shitcoins that take off, then look at this team and the ambition and development behind it, and it will quickly become clear that this is one hell of a project.
The Turkey City Lexicon - annotated for 40K by Matt Farrer circa 2004 - and Farrer's analysis of Abnett's eye-ball kicks
I wrote a suggestion on how to create a Space Marine OC (the whole thread is good reading for aspiring fan authors so I'll link it), and it got me thinking about writing within the 40K setting. Back in the day when Black Library still had their own forum, I saved Matt Farrer's annotation of the Turkey City Lexicon (the original, pre-internet version of TV Tropes). I searched the subreddit for it earlier with no results, so I'll share it again here. — Please note: The Turkey City Lexicon is specifically, explicitly non-copyright and is encouraged to be shared/reposted/expanded. Posting it here in its entirety violates no copyright legislation in any country; in fact, Matt Farrer himself asked us to share it with our fellow writers. Hats off to you, Mr Farrer, for your contributions to the 40K lore, from a longtime fan. — [Originally posted to Black Library Online, November 2004, by user Matt Farrer] The Turkey City Lexicon (Annotated with some Games Workshop observations) The Turkey City Lexicon is a terminology guide that’s been floating around in one form or another since the late eighties (Google will turn up plenty of hits if you want to see one of the original copies; I got this one from the SFWA website). The Lexicon is deliberately not copyrighted and is intended to be copied at will and passed on to other writers (note that you shouldn’t try this with anything else on the SFWA site, if you go there – there are some great articles but most of them are copyrighted). There’s a tendency for people to look at the Lexicon as a list of “common mistakes” or “things not to do”, which is not entirely correct as I understand its purpose. Certainly seeing a common problem set down pithily can help crystallise that particular example of bad technique, but a couple of the terms in here are complimentary and many others aren’t necessarily fatal problems.
As in “you might want to watch out for funny-hat characterisation on page four, although with the narrative voice you use it works well”. What it is meant to be is a useful resource for critiquers, giving you a quick and easy shorthand for a known quantity you’ve observed in writing. In the above example, you don’t need to spend half a paragraph describing a shaky spot in the characterisation, you have a quick term to cover it and save space and time for both of you. The early, simple version of the lexicon by Lewis Shiner was expanded and added to by Bruce Sterling, not, in my opinion, always for the better. There are no real differences in actual content between the two, so for this version I’ve picked whichever version of an entry I thought was better phrased. The GW-specific notes are my own – I’ll add more as I think of them, if I have the time. Discussion of any or all of the entries is of course welcome - it's what I'm posting this for. Anyway, let’s get on with it. The meta-rule: Cherryh's Law No rule should be followed over a cliff. (C.J. Cherryh) MF - There are times when the literary or dramatic effect of breaking any supposed "rule" about writing is going to be worth it, and that includes any and all of the points about writing offered in the Lexicon. Such principles are based on experience that shows that certain approaches work better than others, but getting carried away with imposing a set of rules as though they were holy writ simply turns into an attempt to stamp out creativity and have every writer write exactly alike. Know the principles, understand why they work as they do, but don't wear them like shackles. Part One: Words and Sentences Brenda Starr dialogue Long sections of talk with no physical background or description of the characters. Such dialogue, detached from the story's setting, tends to echo hollowly, as if suspended in mid-air. 
Named for the American comic-strip in which dialogue balloons were often seen emerging from the Manhattan skyline. "Burly Detective" Syndrome This useful term is taken from SF's cousin-genre, the detective-pulp. The hack writers of the Mike Shayne series showed an odd reluctance to use Shayne's proper name, preferring euphemisms like "the burly detective" or "the red-headed sleuth." This comes from a wrong-headed conviction that the same word should not be used twice in close succession. This is only true of particularly strong and visible words, such as "vertiginous." Better to re-use a simple tag or phrase than to contrive cumbersome methods of avoiding it. Brand Name Fever Use of brand name alone, without accompanying visual detail, to create false verisimilitude. You can stock a future with Hondas and Sonys and IBMs and still have no idea what it looks like. "Call a Rabbit a Smeerp" A cheap technique for false exoticism, in which common elements of the real world are re-named for a fantastic milieu without any real alteration in their basic nature or behavior. "Smeerps" are especially common in fantasy worlds, where people often ride exotic steeds that look and act just like horses. (Attributed to James Blish.) Gingerbread Useless ornament in prose, such as fancy sesquipedalian Latinate words where short clear English ones will do. Novice authors sometimes use "gingerbread" in the hope of disguising faults and conveying an air of refinement. (Attr. Damon Knight) Not Simultaneous The mis-use of the present participle is a common structural sentence-fault for beginning writers. "Putting his key in the door, he leapt up the stairs and got his revolver out of the bureau." Alas, our hero couldn't do this even if his arms were forty feet long. This fault shades into "Ing Disease," the tendency to pepper sentences with words ending in "-ing," a grammatical construction which tends to confuse the proper sequence of events. (Attr.
Damon Knight) Pushbutton Words Bogus lyricism like "star," "dance," "dream," "song," "tears" and "poet". Used to evoke a cheap emotional response without engaging the intellect or critical faculties, getting us misty-eyed and tender-hearted without us quite knowing why. Most often found in titles. Roget's Disease The ludicrous overuse of far-fetched adjectives, piled into a festering, fungal, tenebrous, troglodytic, ichorous, leprous, synonymic heap. (Attr. John W. Campbell) "Said" Bookism An artificial verb used to avoid the word "said." "Said" is one of the few invisible words in the English language and is almost impossible to overuse. It is much less distracting than "he retorted," "she inquired," "he ejaculated," and other oddities. The term "said-book" comes from certain pamphlets, containing hundreds of purple-prose synonyms for the word "said," which were sold to aspiring authors from tiny ads in American magazines of the pre-WWII era. Tom Swifty An unseemly compulsion to follow the word "said" with a colourful adverb: "'We'd better hurry,' Tom said swiftly." This was a standard mannerism of the old Tom Swift adventure dime-novels. Good dialogue can stand on its own without a clutter of adverbial props. Part Two: Paragraphs and Prose Structure Bathos A sudden, alarming change in the level of diction. "There will be bloody riots and savage insurrections leading to a violent popular uprising unless the regime starts being lots nicer about stuff." Countersinking Expositional redundancy. "'Let's get out of here,' he said, urging her to leave." Dischism The unwitting intrusion of the author's physical surroundings or mental state into the text of the story. Authors who smoke or drink while writing often drown or choke their characters with an endless supply of booze and cigs. 
In subtler forms of the Dischism, the characters complain of their confusion and indecision -- when this is actually the author's condition at the moment of writing, not theirs within the story. "Dischism" is named after the critic who diagnosed this syndrome. (Attr. Thomas M. Disch) False Humanity An ailment endemic to genre writing, in which soap-opera elements of purported human interest are stuffed into the story willy-nilly, whether or not they advance the plot or contribute to the point of the story. The actions of such characters convey an itchy sense of irrelevance, for the author has invented their problems out of whole cloth, so as to have something to emote about. False Interiorisation A cheap labour-saving technique in which the author, too lazy to describe the surroundings, afflicts the viewpoint-character with a blindfold, an attack of space-sickness, the urge to play marathon whist-games in the smoking-room, etc. Fuzz An element of motivation the author was too lazy to supply. The word "somehow" is a useful tip-off to fuzzy areas of a story. "Somehow she had forgotten to bring her gun." Hand Waving An attempt to distract the reader with dazzling prose or other verbal fireworks, so as to divert attention from a severe logical flaw. (Attr. Stewart Brand) Laughtrack Characters grandstand and tug the reader's sleeve in an effort to force a specific emotional reaction. They laugh wildly at their own jokes, cry loudly at their own pain, and rob the reader of any real chance of attaining genuine emotion. Show, Don’t Tell A cardinal principle of effective writing. The reader should be allowed to react naturally to the evidence presented in the story, not instructed in how to react by the author. Specific incidents and carefully observed details will render auctorial lectures unnecessary. 
For instance, instead of telling the reader "She had a bad childhood, an unhappy childhood," a specific incident -- involving, say, a locked closet and two jars of honey -- should be shown. Rigid adherence to show-don't-tell can become absurd. Minor matters are sometimes best gotten out of the way in a swift, straightforward fashion. Signal from Fred A comic form of the "Dischism" in which the author's subconscious, alarmed by the poor quality of the work, makes unwitting critical comments: "This doesn't make sense." "This is really boring." "This sounds like a bad movie." (Attr. Damon Knight) Squid in the Mouth The failure of an author to realize that his/her own weird assumptions and personal in-jokes are simply not shared by the world-at-large. Instead of applauding the wit or insight of the author's remarks, the world-at-large will stare in vague shock and alarm at such a writer, as if he or she had a live squid in the mouth. Since SF writers as a breed are generally quite loony, and in fact make this a stock in trade, "squid in the mouth" doubles as a term of grudging praise, describing the essential, irreducible, divinely unpredictable lunacy of the true SF writer. (Attr. James P Blaylock) Squid on the Mantelpiece Chekhov said that if there are dueling pistols over the mantelpiece in the first act, they should be fired in the third. In other words, a plot element should be deployed in a timely fashion and with proper dramatic emphasis. However, in SF plotting the MacGuffins are often so overwhelming that they cause conventional plot structures to collapse. It's hard to properly dramatize, say, the domestic effects of Dad's bank overdraft when a giant writhing kraken is levelling the city. This mismatch between the conventional dramatic proprieties and SF's extreme, grotesque, or visionary thematics is known as the "squid on the mantelpiece." 
MF – I’ve heard several versions of the supposed “Chekhov’s Gun” principle, no two of them meaning exactly the same thing. For example, the version I first heard is “If a character produces a gun, then it should be used to shoot someone, or threaten someone, or go off by accident, or fail to fire when it’s needed, and so on. If it does none of these things, then it is superfluous and should be taken out altogether.” That’s a point about narrative tidiness rather than timely deployment of plot elements. White Room Syndrome A clear and common sign of the failure of the author's imagination, most often seen at the beginning of a story, before the setting, background, or characters have gelled. "She awoke in a white room." The 'white room' is a featureless set for which details have yet to be invented -- a failure of invention by the author. The character 'wakes' in order to begin a fresh train of thought -- again, just like the author. This 'white room' opening is generally followed by much earnest pondering of circumstances and useless exposition; all of which can be cut, painlessly. It remains to be seen whether the "white room" cliche' will fade from use now that most authors confront glowing screens rather than blank white paper. Wiring Diagram Fiction A genre ailment related to "False Humanity," "Wiring Diagram Fiction" involves "characters" who show no convincing emotional reactions at all, since they are overwhelmed by the author's fascination with gadgetry or didactic lectures. MF – A trap hard SF often falls into, in my experience. I suppose the related ailment in GW fiction would be “fluff-diagram fiction” (sorry Gav), in which the story is sidelined by the author’s desire to lay out in detail some aspect of his take on the game-universe. You Can't Fire Me, I Quit An attempt to diffuse the reader's incredulity with a pre-emptive strike -- as if by anticipating the reader's objections, the author had somehow answered them. 
"I would never have believed it, if I hadn't seen it myself!" "It was one of those amazing coincidences that can only take place in real life!" "It's a one-in-a-million chance, but it's so crazy it just might work!" Surprisingly common, especially in SF. (Attr. John Kessel) Part Three: Common Workshop Story Types Adam and Eve Story Nauseatingly common subset of the "Shaggy God Story" in which a terrible apocalypse, spaceship crash, etc., leaves two survivors, man and woman, who turn out to be Adam and Eve, parents of the human race! MF – Not an issue for GW writing for obvious reasons. See Alfred Bester’s “Adam With No Eve” in the brilliant anthology Starburst for a rather good twist on the idea. The Cosy Catastrophe Story in which horrific events are overwhelming the entirety of human civilization, but the action concentrates on a small group of tidy, middle-class, white Anglo-Saxon protagonists. The essence of the cosy catastrophe is despite the supposed devastation the hero actually has a pretty good time (a girl, free suites at the Savoy, fancy cars for the taking) while everyone is dying off. (Attr. Brian Aldiss) Dennis Hopper Syndrome A story based on some arcane bit of science or folklore, which noodles around producing random weirdness. Then a loony character-actor (usually best played by Dennis Hopper) barges into the story and baldly tells the protagonist what's going on by explaining the underlying mystery in a long bug-eyed rant. (Attr. Howard Waldrop) MF - Not unrelated to Roger Ebert's remarks about the Talking Killer device, aka "Before I kill you, Mister Bond..." The killer gets the protagonist at his mercy and then decides to put off killing him so that he can fill the hero in on exactly what's been going on, and bring the reader up to speed at the same time. You know, like I did at the end of Crossfire. Although this is a plot device rather than an actual story type. 
Deus ex Machina or "God in the Box" Story featuring a miraculous solution to the story's conflict, which comes out of nowhere and renders the struggles of the characters irrelevant. Oh look, the Martians all caught cold and died. The Grubby Apartment Story Writing a little too much about what you know. The penniless writer living in a grubby apartment writes a story about a penniless writer living in a grubby apartment. Stars all his friends. The Jar of Tang "For you see, we are all living in a jar of Tang!" "For you see, I am a dog!" Mainstay of the old Twilight Zone TV show. An entire pointless story contrived so the author can jump out at the end and cry "Fooled you!" For instance, the story takes place in a desert of coarse orange sand surrounded by an impenetrable vitrine barrier; surprise! our heroes are microbes in a jar of Tang powdered orange drink. This is a classic case of the difference between a conceit and an idea. "What if we all lived in a jar of Tang?" is an example of the former; "What if the revolutionaries from the sixties had been allowed to set up their own society?" is an example of the latter. Good SF requires ideas, not conceits. (Attr. Stephen P. Brown) When done with serious intent rather than as a passing conceit, this type of story can be dignified by the term "Concealed Environment." (Attr. Christopher Priest) Just-Like Fallacy SF story which thinly adapts the trappings of a standard pulp adventure setting. The spaceship is "just like" an Atlantic steamer, down to the Scottish engineer in the hold. A colony planet is "just like" Arizona except for two moons in the sky. "Space Westerns" and futuristic hard-boiled detective stories have been especially common versions. MF – Then again, one of the fun things about the GW settings – the 40Kverse more than the Warhammer world, it seems to me – is the way you can rip all kinds of stuff off and stuff it in there to do a 41st-millennium tribute to it. 
Not necessarily a bad thing, providing you don’t end up in Bat Durston territory (more about him another time). [From another post:] In case you are not familiar with the term, a Bat Durston refers derogatorily to a science fiction story which is little more than a traditional western using sf settings and icons. Taking the comparison to alternate history, the better stories in this genre should create the story’s world for some reason other than merely creating a nice setting for an adventure. The Kitchen-Sink Story A story overwhelmed by the inclusion of any and every new idea that occurs to the author in the process of writing it. (Attr. Damon Knight) The Motherhood Statement SF story which posits some profoundly unsettling threat to the human condition, explores the implications briefly, then hastily retreats to affirm the conventional social and humanistic pieties, ie apple pie and motherhood. Greg Egan once stated that the secret of truly effective SF was to deliberately "burn the motherhood statement." (Attr. Greg Egan) MF - He wasn’t kidding, either. Greg Egan writes some of the most powerful and disturbing hard SF I’ve read, precisely because he’s not afraid to face the full implications of the science and technology he writes about. I think that 40K writing is vulnerable to this to a certain degree: I’ve seen quite a few stories that dip a toe into the grim, violent, insane world of the 41st Millennium, stay there for a moment but quickly fall back into “but the Imperium is actually an OK place and lots of people there are nice and happy just like us”. Discussion on this welcome. The "Poor Me" Story Autobiographical piece in which the male viewpoint character complains that he is ugly and can't get laid. (Attr. Kate Wilhelm) Re-Inventing the Wheel A novice author goes to enormous lengths to create a situation already tiresomely familiar to the experienced reader.
Reinventing the Wheel was traditionally typical of mainstream writers venturing into SF without actually reading any of the existing stuff first (because it's all obviously crap anyway). Thus you get endless explanations of, say, how an atomic war might get started by accident, and so on. It is now often seen in writers who lack experience in genre history because they were attracted to written SF via movies, television, role-playing games, comics or computer gaming. MF – Not that coming into the genre that way is a bad thing per se, but when a writer hasn’t had much exposure to written specfic in this way it usually shows, and not in a good way. To quote Terry Pratchett, you should be importing, not recycling. The Rembrandt Comic Book A story in which incredible craftsmanship has been lavished on a theme or idea which is basically trivial or subliterary, and which simply cannot bear the weight. The Shaggy God Story A piece which mechanically adopts a Biblical or other mythological tale and provides flat science-fictional "explanations" for the theological events. (Attr. Michael Moorcock) MF – Although he wrote them himself: arguably his finest and most powerful story, called “Behold The Man”, does this for the life of Jesus. I remember it disturbed me when I read it, and I’m not even religious. The Slipstream Story Non-SF story which is so ontologically distorted or related in such a bizarrely non-realist fashion that it cannot pass muster as commercial mainstream fiction and therefore seeks shelter in the SF or fantasy genre. Postmodern critique and technique are particularly fruitful in creating slipstream stories. The Steam-Grommet Factory Didactic SF story which consists entirely of a guided tour of a large and elaborate gimmick. A common technique of SF utopias and dystopias. (Attr. Gardner Dozois) MF – See the opening of Huxley’s Brave New World for an example of this done effectively. 
The Tabloid Weird Story produced by a confusion of SF and Fantasy tropes -- or rather, by a confusion of basic world-views. Tabloid Weird is usually produced by the author's own inability to distinguish between a rational, Newtonian-Einsteinian, cause-and-effect universe and an irrational, supernatural, fantastic universe. Either the FBI is hunting the escaped mutant from the genetics lab, or the drill-bit has bored straight into Hell -- but not both at once in the very same piece of fiction. Even fantasy worlds need an internal consistency of sorts, so that a Sasquatch Deal-with-the-Devil story is also "Tabloid Weird." Sasquatch crypto-zoology and Christian folk superstition simply don't mix well, even for comic effect. (Attr. Howard Waldrop) MF – I’m not as convinced as the Lexicon that these two genres are utterly incompatible. Well, obviously not, since I work in a setting which combines them without hesitation. Which isn’t to say that the combination doesn’t need to be handled delicately, since those aforementioned different mindsets lead to different storytelling conventions as well as different world views. The Whistling Dog A story related in such an elaborate, arcane, or convoluted manner that it impresses by its sheer narrative ingenuity, but which, as a story, is basically not worth the candle. Like the whistling dog, it's astonishing that the thing can whistle -- but it doesn't actually whistle very well. (Attr. Harlan Ellison) Part Four: Plots Abbess Phone Home Takes its name from a mainstream story about a medieval cloister which was sold as SF because of the serendipitous arrival of a UFO at the end. By extension, any mainstream story with a gratuitous SF or fantasy element tacked on so it could be sold. And plot Picaresque plot in which this happens, and then that happens, and then something else happens, and it all adds up to nothing in particular. Bogus Alternatives List of actions a character could have taken, but didn't.
Frequently includes all the reasons why, as the author stops the action dead to work out complicated plot problems at the reader's expense. "If I'd gone along with the cops they would have found the gun in my purse. And anyway, I didn't want to spend the night in jail. I suppose I could have just run instead of stealing their car, but then..." etc. Best dispensed with entirely. Card Tricks in the Dark Elaborately contrived plot which arrives at (a) the punchline of a private joke nobody else will get, or (b) the display of some bit of learned trivia only the author is interested in. This stunt may be intensely ingenious, and very gratifying to the author, but it serves no visible fictional purpose. (Attr. Tim Powers) Idiot Plot A plot which functions only because all the characters involved are idiots. They behave in a way that suits the author's convenience, rather than through any rational motivation of their own. (Attr. James Blish) Kudzu plot Plot which weaves and curls and writhes in weedy organic profusion, smothering everything in its path. Plot Coupons The basic building blocks of the quest-type fantasy plot. The "hero" collects sufficient plot coupons (magic sword, magic book, magic cat) to send off to the author for the ending. Note that "the author" can be substituted for "the Gods" in such a work: "The Gods decreed he would pursue this quest." Right, mate. The author decreed he would pursue this quest until sufficient pages were filled to procure an advance. (Dave Langford) MF - Nick Lowe expands on the idea in an excellent article at www.ansible.co.uk/Ansible/plotdev.html. Cheers to Bill King for the link. Second-order Idiot Plot A plot involving an entire invented SF society which functions only because every single person in it is necessarily an idiot. (Attr. Damon Knight) MF – The assertion that this applies to the 40K Imperium is not a new one.
Floor’s open…

Part Five: Background

"As You Know Bob"
A pernicious form of info-dump through dialogue, in which characters tell each other things they already know, for the sake of getting the reader up-to-speed. This very common technique is also known as "Rod and Don dialogue" (attr. Damon Knight) or "maid and butler dialogue" (attr. Algis Budrys).

The Edges of Ideas
The solution to the "Info-Dump" problem (how to fill in the background). The theory is that, for example, the mechanics of an interstellar drive (the centre of the idea) are not important. What matters is the impact on your characters: they can get to other planets in a few months, and, oh yeah, it gives them hallucinations about past lives. Or, more radically: the physics of TV transmission is the center of an idea; on the edges of it we find people turning into couch potatoes because they no longer have to leave home for entertainment. Or, more bluntly: we don't need info dump at all. We just need a clear picture of how people's lives have been affected by their background.

Eyeball Kick
That perfect, telling detail that creates an instant visual image. The ideal of certain postmodern schools of SF is to achieve a "crammed prose" full of "eyeball kicks." (Rudy Rucker)

MF - See the other thread.

Frontloading
Piling too much exposition into the beginning of the story, so that it becomes so dense and dry that it is almost impossible to read. (Attr. Connie Willis)

Infodump
Large chunk of indigestible expository matter intended to explain the background situation. Info-dumps can be covert, as in fake newspaper or "Encyclopedia Galactica" articles, or overt, in which all action stops as the author assumes center stage and lectures. Info-dumps are also known as "expository lumps." The use of brief, deft, inoffensive info-dumps is known as "kuttnering," after Henry Kuttner. When information is worked unobtrusively into the story's basic structure, this is known as "heinleining."
"I've suffered for my Art" (and now it's your turn)
A form of info-dump in which the author inflicts upon the reader hard-won, but irrelevant bits of data acquired while researching the story. As Algis Budrys once pointed out, homework exists to make the difficult look easy.

Nowhere Nowhen Story
Putting too little exposition into the story's beginning, so that the story, while physically readable, seems to take place in a vacuum and fails to engage any readerly interest. (Attr. L. Sprague de Camp)

Ontological riff
Passage in an SF story which suggests that our deepest and most basic convictions about the nature of reality, space-time, or consciousness have been violated, technologically transformed, or at least rendered thoroughly dubious. The works of H. P. Lovecraft, Barrington Bayley, and Philip K. Dick abound in "ontological riffs."

Space Western
The most pernicious suite of "Used Furniture". The grizzled space captain swaggering into the spacer bar and slugging down a Jovian brandy.

Stapledon
Name assigned to the voice which takes centre stage to lecture. Actually a common noun, as: "You have a Stapledon come on to answer this problem instead of showing the characters resolve it."

Used Furniture
Use of a background out of Central Casting. Rather than invent a background and have to explain it, or risk re-inventing the wheel, let's just steal one. We'll set it in the Star Trek Universe, only we'll call it the Empire instead of the Federation.

Part Six: Character and Viewpoint

Funny-hat characterization
A character distinguished by a single identifying tag, such as odd headgear, a limp, a lisp, a parrot on his shoulder, etc.

MF – This can work if done deftly and with minor characters. Stephen King excels at it, and Ed McBain is pretty good too.

Mary Sue
A ridiculously perfect and idealised character, moving through a story which serves no other purpose than demonstrating how ridiculously perfect and idealised Mary Sue is.
None of the other characters have anything to do other than rave about Mary Sue's wonderfulness; challenges and obstacles exist only for Mary Sue to solve effortlessly to admiring gasps from everyone else. Also known as "avatars" or "self-insertion", since the most common Mary Sues are thinly-disguised versions of the author and are more about wish-fulfilment fantasies than conventional storytelling. Endemic to fanfic; the term apparently originates from an early and infamous example in an old Star Trek fanzine.

MF - There are lots of definitions and examples of Mary Sue, although the term as it's used here isn't really attributable to one author any more. The definition supplied here owes much to Teresa Nielsen Hayden's rather good one at http://nielsenhayden.com/makinglight/archives/004188.html.

GW fanfics and homebrew backgrounds aren't immune either - you can find them pretty easily once you know the signs. The twist is that the Mary Sue is often a Guard regiment, Space Marine Chapter, Eldar Craftworld or an entire galactic state. Common warning signs:

"The Mary Sue Regiment fought so ferociously in the Battle of Sueville that even the [famous Space Marine Chapter] were awe-struck that unaugmented humans could fight so hard, and their Chapter Master officially declared the Mary Sue regiment the equals of Space Marines".

"Inquisitor Mary Sue has demonstrated such amazing ability that the High Lords have personally ordered that nobody is allowed to stand in her way or question her actions".
"Now that it has declared independence from the Imperium the Mary Sue Republic has become a haven of enlightenment and progress, where technology is being developed at an exponential rate with no aura of superstitious mysticism, painless and fully-effective techniques to protect psykers from daemonic attack have been developed, alien races of all kinds are putting aside their differences and living contentedly side by side, and where every Imperial who sees what's going on immediately defects once they see how wonderful and free life among the Mary Sues is".

I've since found out that even the original "Ensign Mary Sue" in that old seventies fanfic was a satire on the trope, so clearly it was already a fiction cliche by then.

Mrs. Brown
The small, downtrodden, eminently common, everyday little person who nevertheless encapsulates something vital and important about the human condition. "Mrs. Brown" is a rare personage in the SF genre, being generally overshadowed by swaggering submyth types made of the finest gold-plated cardboard. In a famous essay, "Science Fiction and Mrs. Brown," Ursula K. Le Guin decried Mrs. Brown's absence from the SF field. (Attr: Virginia Woolf)

...stamped on their forehead
The story lets a character get away with something illogical or impossible because they have "hero" (or "villain", "sidekick", "disposable underling", or whatever) stamped on their foreheads. There's nothing wrong with heroes triumphing against the odds or villains being brought low through their own flaws, but those consequences need to come about because of the characters and their actions rather than despite them. Adapted from Aaron Allston's roleplayers' glossary from a few years ago, which included "He's got 'PC' [player character] stamped on his forehead" as an all-purpose excuse for why characters unquestioningly accepted or trusted one another's actions while treating non-player characters differently. (Aaron Allston.)
MF - This was partly prompted by the "script immunity" and "Hollywood Shield" ideas in the discussion thread, although the scene I had in mind for it was actually in Walking Tall, where the main character is manifestly guilty of all manner of assaults and property destruction but is acquitted in court when he makes a sentimental speech about down-home values. It doesn't even resemble making a legal case for his innocence, but he gets let off because he's got "hero" stamped on his forehead.

Submyth
Classic character-types in SF which aspire to the condition of archetype but don't quite make it, such as the mad scientist, the crazed supercomputer, the emotionless super-rational alien, the vindictive mutant child, etc. (Attr. Ursula K. Le Guin)

MF – You can pick the GWverse submyths for yourselves, I’m sure.

Viewpoint glitch
The author loses track of point-of-view, switches point-of-view for no good reason, or relates something that the viewpoint character could not possibly know.

Part Seven: Miscellaneous

AM/FM
Engineer's term distinguishing the inevitable clunky real-world faultiness of "Actual Machines" from the power-fantasy techno-dreams of "Fething Magic."

MF – Except the original Lexicon didn’t say “fething”. :grinning_emoticon: Well worth remembering for 40K and Necromunda fiction, which deliberately shies away from the sleek, clean, super-reliable dream-tech of settings like Star Trek.

Consensus Reality
Useful term for the purported world in which the majority of modern sane people generally agree that they live -- as opposed to the worlds of, say, Forteans, semioticians or quantum physicists.

Intellectual sexiness
The intoxicating glamor of a novel scientific idea, as distinguished from any actual intellectual merit that it may someday prove to possess.

The Ol' Baloney Factory
"Science Fiction" as a publishing and promotional entity in the world of commerce.
— Additional suggestions from other forum members:

User Chiron:
Script Immunity
The tendency of lynchpin characters to be blatantly immune to harm, despite the fact that they consistently place themselves in situations that they cannot reasonably be expected to survive.

User Vortemir:
Hollywood Shield / Imperial Stormtrooper Syndrome
Bad Guys will never be able to hit essential characters no matter what they're armed with or how hard they try.

— [Originally posted to Black Library Online, October 2004, by user Matt Farrer]

A term from the Turkey City Lexicon that might be useful here is the "eyeball kick", Rudy Rucker's term for that perfectly-turned descriptive phrase that creates an instant, telling visual image for the reader. An example that springs to mind from the opening of Necropolis:
After a minute or so, raid-sirens in the central district also began keening. The pattern was picked up by manufactory hooters and mill whistles all through the lower hive, and in the outer habs across the river too. Even the great ceremonial horns on the top of the Ecclesiarchy Basilica started to sound. Vervunhive was screaming with every one of its voices.
That last line provides the eyeball kick. Some other examples that spring to mind: "[he] screamed out two mouthfuls of silent spun glass" (Stephen King); "the sky above the port was the colour of television, tuned to a dead channel" (William Gibson); "a great moist loaf of a body... features as bunched as kissed fingertips" (E. Annie Proulx); "[after walking through snow] my feet, in wet socks, slowly turned to marble and fell off" (Donald Westlake).

I don't know if there's a way you can break down an eyeball kick to pick apart the technique, since its whole impact comes from lateral thinking and the effect of an incongruous image that nevertheless fits exactly with what you're describing. It's an imagination thing rather than a technique thing. However, the paragraph from Necropolis that I used above is also a very good example of how to maximise the effect of a good piece of description, and worth having a closer look at.

Firstly, the rest of the paragraph has been describing the machinery that makes the sound, and doing so in fairly neutral, inorganic terms: "keening", at the start of the para, is about as close as we get to an emotive word. The rest is a pretty calm description of how a series of klaxons and horns are going off. That increases the wrench when we suddenly switch gears into words that you'd use to describe a living being in agony: "screaming with every one of its voices", which gives weight to the sense of foreboding that dominates the early pages. This is reinforced further by the way that the previous sentences tend to be longer, with more connecting commas and lots of adjectives to slow their rhythm and give a more discursive feel, while the last sentence is a simple, flat declarative. Using the rhythm of words and sentences for a setup and payoff like that is a very good way of driving home a piece of exposition or description, and it's something that Dan uses quite a bit.
Secondly, look at the way that the passage, which at first blush is about the sounds of the sirens, actually helps build a visual image as well. We've been going through all the various parts and districts of Vervunhive, watching as different kinds of buildings in different areas go off. Look at how the mental "camera" moves down the lower hive, then down the river, then up to the top of the Basilica. Then in the last sentence we get an eyeball kick that describes the whole of Vervunhive as a single entity: the effect is like pulling back sharply from an individual scene or building and seeing the whole Hive at once. And that concludes the main piece of visual scene-setting at the opening: notice that in the next line Dan can start in on conversations between individual characters around the Hive because the major scene has been laid out.

The broad point to take away from this is that each piece of text should work on as many levels as possible, and even a short passage like that one can be far more than the sum of its parts. I suspect that the reason a lot of bad fiction (including, I am sorry to say, a lot of fanfic I've seen) seems so flat and plodding is that each sentence is put down to do one thing: make a statement, provide a description or what have you. But there's no depth to the prose, no interaction between the sentences to create any rhythm, or momentum, or startling switch in imagery. It's like a song from your favourite band, with each element (vocals, percussion, each instrument) separated and played end to end. It sounds so much better when they're all working together.

— That's it. Got any suggestions for new 40K-specific tropes to add?
A lot of exciting possibilities may be available soon thanks to Oikos. I immediately thought about decentralised trading bots managing investment portfolios 100% algorithmically, regularly trading a basket of cryptos and synthetic assets like sBTC and sETH. They could even connect to a machine learning API via an oracle for some really advanced strategies. Anyone got any suggestions for dapps or features which are missing from TRON that would bring in a lot of users? I know a lot of DeFi is missing, but I believe it will come once developers realise the advantages TRON has over its competitors.
How many people really understand what they’re buying, especially when it comes to highly specialized hardware companies? Most NVidia investors seem to be relying on a vague idea of how the company should thrive “in the future”, as their GPUs are ostensibly used for Artificial Intelligence, Cloud, holograms, etc. Having been shocked by how this company is represented in the media, I decided to lay out how this business works, doing my part to fight for reality. With what’s been going on in markets, I don’t like my chances, but here goes. Let’s start with…

How does NVDA make money?

NVDA is in the business of semiconductor design. As a simplified image in your head, you can imagine this as designing very detailed and elaborate posters. Their engineers create circuit patterns for printing onto semiconductor wafers. NVDA then pays a semiconductor foundry (the printer – generally TSMC) to create chips with those patterns on them. Simply put, NVDA’s profits are the price at which they can sell those chips, less the cost of printing, and less the cost of paying their engineers to design them. Notably, after the foundry prints the chips, NVDA also has to pay (I say pay, but really it is more like “sell at a discount to”) their “add-in board” (AIB) partners to stick the chips onto printed circuit boards (what you might imagine as green things with a bunch of capacitors on them). That leads to the final form in which buyers experience the GPU.

What is a GPU?

NVDA designs chips called GPUs (Graphics Processing Units). Initially, GPUs were used for the rapid processing and creation of images, but their use cases have expanded over time. You may be familiar with the CPU (Central Processing Unit). CPUs sit at the core of a computer system, doing most of the calculation, taking orders from the operating system (e.g. Windows, Linux), etc. AMD and Intel make CPUs. GPUs assist the CPU with certain tasks.
You can think of the CPU as having a few giant, very powerful engines. The GPU has a lot of small, much less powerful engines. Sometimes you have to do a lot of really simple tasks that don’t require powerful engines to complete. Here, engaging the powerful engines is a waste of time, as you end up spending most of your time revving them up and revving them down. In that scenario, it helps the CPU to hand the task over to the GPU in order to “accelerate” its completion. The GPU only revs up a small engine for each task, and is able to rev up all the small engines simultaneously to knock out a large number of these simple tasks at the same time. Remember, the GPU has lots of engines. The GPU also has an edge in working closely with memory, but let’s not get too technical.

Who uses NVDA’s GPUs?

There are two main broad end markets for NVDA’s GPUs – Gaming and Professional. Let’s dig into each one.

The Gaming Market: A Bit of Ancient History (Skip if impatient)

GPUs were first heavily used for gaming in arcades. They then made their way to consoles, and finally PCs. NVDA started out in the PC phase of GPU gaming usage. They weren’t the first company in the space, but they made several good moves that ultimately led to a very strong market position. Firstly, they focused on selling into OEMs – the equivalent of today’s DELL/HP/Lenovo – which allowed a small company to get access to a big market without having to create a lot of relationships. Secondly, they focused on the design aspect of the GPU, and relied on their Asian supply chain to print the chip, package it and install it on a printed circuit board – the Asian supply chain ended up being the best in semis. But the insight that really let NVDA dominate was noticing that some GPU manufacturers were keeping hardware-accelerated Transform and Lighting as a Professional GPU feature.
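The "many small engines" picture is just data parallelism. A rough illustrative sketch of the idea (mine, not the original poster's), with a plain Python loop standing in for the CPU working item by item and a single vectorised NumPy call standing in for the GPU sweeping over the whole batch at once:

```python
import numpy as np

# Illustrative analogy only: the GPU's "many small engines" shine when
# the same trivial operation must be applied to a huge batch of
# elements. The loop below processes pixels one at a time ("one big
# engine"); the vectorised call processes them all in one bulk
# data-parallel operation ("many small engines").

def brighten_loop(pixels):
    # One element at a time, capped at the 8-bit maximum of 255.
    return [min(p + 10, 255) for p in pixels]

def brighten_parallel(pixels):
    # The same operation expressed as a single bulk array computation.
    return np.minimum(np.asarray(pixels) + 10, 255)

pixels = [0, 100, 250]
print(brighten_loop(pixels))               # [10, 110, 255]
print(brighten_parallel(pixels).tolist())  # [10, 110, 255]
```

On real hardware the vectorised form wins not because the maths is different, but because the work is handed to many simple execution units at once, which is exactly the trade the GPU makes.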
As a start-up, with no professional GPU business to disrupt, NVidia decided their best ticket into the big leagues was blowing up the market by including this professional-grade feature in their gaming product. It worked – and this was a real masterstroke – the visual and performance improvements were extraordinary. 3DFX, the initial leader in PC gaming GPUs, was vanquished; importantly, this happened just as funding markets shut down with the tech bubble bursting, and after 3DFX had made some large ill-advised acquisitions. Consequently, 3DFX went from hero to zero, and NVDA bought them for a pittance out of bankruptcy, acquiring the best IP portfolio in the industry.

Some more Modern History

This is what NVDA’s pure gaming card revenue looks like over time – NVDA only really broke these out in 2005 (note: by pure, this means ex-Tegra revenues): 📷 https://hyperinflation2020.tumblr.com/private/618394577731223552/tumblr_Ikb8g9Cu9sxh2ERno

So what is the history here? Well, back in the late 90s when GPUs were first invented, they were required to play any 3D game. As discussed in the early history above, NVDA landed a hit product early and got a strong burst of growth: revenues of 160M in 1998 went to 1900M in 2002. But then NVDA ran into strong competition from ATI (later purchased and currently owned by AMD). While NVDA’s sales struggled to stay flat from 2002 to 2004, ATI’s doubled from 1Bn to 2Bn. NVDA’s next major win came in 2006, with the 8000 series. ATI was late with a competing product, and NVDA’s sales skyrocketed – as can be seen in the graph above. With ATI being acquired by AMD they were unfocused for some time, and NVDA was able to keep their lead for an extended period. Sales slowed in 2008/2009, but that was due to the GFC – people don’t buy expensive GPU hardware in recessions. And then we got to 2010 and the tide changed. Growth in desktop PCs ended.
Here is a chart from Statista: 📷 https://hyperinflation2020.tumblr.com/private/618394674172919808/tumblr_OgCnNwTyqhMhAE9r9

This resulted in two negative secular trends for Nvidia. Firstly, with the decline in popularity of desktop PCs, growth in gaming GPUs faded as well (below is a chart from Jon Peddie). Note that NVDA sells discrete GPUs, aka DT (Desktop) Discrete. Integrated GPUs are mainly made by Intel (these sit on the motherboard or with the CPU). 📷 https://hyperinflation2020.tumblr.com/private/618394688079200256/tumblr_rTtKwOlHPIVUj8e7h

You can see from the chart above that discrete desktop GPU sales are fading faster than integrated GPU sales. This is the other secular trend hurting NVDA’s gaming business: integrated GPUs are getting better and better, taking over a wider range of tasks that were previously the domain of the discrete GPU. Surprisingly, the most popular eSports game of recent times – Fortnite – only requires Intel HD 4000 graphics, an integrated GPU from 2012!

So at this point you might go back to NVDA’s gaming sales and ask the question: What happened in 2015? How is NVDA overcoming these secular trends? The answer consists of a few parts. Firstly, AMD dropped the ball in 2015. As you can see in this chart, sourced from 3DCenter, AMD market share was halved in 2015, due to a particularly poor product line-up: 📷 https://hyperinflation2020.tumblr.com/private/618394753459994624/tumblr_J7vRw9y0QxMlfm6Xd

Following this, NVDA came out with Pascal in 2016 – a very powerful offering in the mid to high end part of the GPU market. At the same time, AMD was focusing on rebuilding and had no compelling mid or high end offerings; AMD mainly focused on maintaining scale in the very low end.
Following that came 2017 and 2018. AMD’s offering was still very poor at the time, but cryptomining drove demand for GPUs to new levels, and AMD’s GPUs were initially more compelling from a price-performance standpoint for crypto mining, perversely leading to AMD gaining share. NVDA quickly remedied that by improving their drivers to better mine crypto, regaining their relative positioning and profiting in a big way from the crypto boom. Supply that was calibrated to meet gaming demand collided with cryptomining demand, and average selling prices of GPUs shot through the roof. Cryptominers bought top-of-the-line GPUs aggressively. A good way to see changes in crypto demand for GPUs is the mining profitability of Ethereum: 📷 https://hyperinflation2020.tumblr.com/private/618394769378443264/tumblr_cmBtR9gm8T2NI9jmQ

This leads us to where we are today. 2019 saw gaming revenues drop for NVDA. Where are they likely to head? The secular trends of falling desktop sales along with falling discrete GPU sales have reasserted themselves, as per the Jon Peddie research above. Cryptomining profitability has collapsed. AMD has come out with a new architecture, NAVI, and the 5700XT – its first iteration – competes effectively with NVDA in the mid-high end space on a price/performance basis. This is the first real competition from AMD since 2014.

NVDA can see all these trends, and they tried to respond. Firstly, with volumes clearly declining, and likely with a glut of second-hand GPUs that can make their way to gamers over time from the crypto space, NVDA decided to pursue a price-over-volume strategy. They released their most expensive set of GPUs by far in the latest Turing series. They added a new feature, Ray Tracing, by leveraging the Tensor Cores they had created for Professional uses, hoping to use that as justification for higher prices (more on this in the section on Professional GPUs).
Unfortunately for NVDA, gamers have responded quite poorly to Ray Tracing – it caused performance issues, had poor support and poor adoption, and the visual improvements in most cases are not particularly noticeable or relevant. The last recession led to gaming revenues falling 30%, despite NVDA being in a very strong position at the time vis-à-vis AMD – this time around their position is quickly slipping, and it appears that the recession is going to be bigger. Additionally, the shift away from discrete GPUs in gaming continues. To make matters worse for NVDA, AMD won the slots in both the new Xbox and the new PlayStation, coming out later this year. The performance of just the AMD GPU in those consoles looks to be competitive with NVidia products that currently retail for more than the entire console is likely to cost – and consider that usually you have to pair that NVidia GPU with a bunch of other expensive hardware. The pricing and margin impact of this console cycle on NVDA is likely to be very substantially negative. It would be prudent to assume a greater than 30% fall in gaming revenues from the very elevated 2019 levels, with likely secular decline to follow.

The Professional Market: A Bit of Ancient History (again, skip if impatient)

As it turns out, graphical accelerators were first used in the Professional market, long before they were employed for Gaming purposes. The big leader in the space was a company called Silicon Graphics, who sold workstations with custom silicon optimised for graphical processing. Their sales were only $25Mn in 1985, but by 1997 they were doing 3.6Bn in revenue – truly exponential growth. Unfortunately for them, from that point on, discrete GPUs took over, and their highly engineered, customised workstations looked exorbitantly expensive in comparison. Sales sank to 500mn by 2006 and, with no profits in sight, they ended up filing for bankruptcy in 2009. Competition is harsh in the semiconductor industry.
Initially, the Professional market centred on visualisation and design, but it has changed over time. There were a lot of players and a lot of nuance, but I am going to focus on more recent times, as they are more relevant to NVidia.

Some More Modern History

NVDA’s Professional business started after its gaming business, but we don’t have revenue disclosures that show exactly when it became relevant. This is what we do have – going back to 2005: 📷 https://hyperinflation2020.tumblr.com/private/618394785029472256/tumblr_fEcYAzdstyh6tqIsI

In the beginning, Professional revenues were focused on the 3D visualisation end of the spectrum, with initial sales going into workstations that were edging out the customised builds made by Silicon Graphics. Fairly quickly, however, GPUs added more and more functionality and started to turn into general parallel data processors rather than being solely optimised towards graphical processing. As this change took place, people in scientific computing noticed, and started using GPUs to accelerate scientific workloads that involve very parallel computation, such as matrix manipulation. This started at the workstation level, but by 2007 NVDA decided to make a new line-up of Tesla series cards specifically suited to scientific computing. The Professional segment now has several points of focus:
GPUs used in workstations for things such as CAD graphical processing (Quadro Line)
GPUs used in workstations for computational workloads such as running engineering simulations (Quadro Line)
GPUs used in workstations for machine learning applications (Quadro line, though gaming cards can be used for this as well)
GPUs used by enterprise customers for high performance computing (such as modelling oil wells) (Tesla Line)
GPUs used by enterprise customers for machine learning projects (Tesla Line)
GPUs used by hyperscalers (mostly for machine learning projects) (Tesla Line)
In more recent times, given the expansion of the Tesla line, NVDA has broken up reporting into Professional Visualisation (Quadro Line) and Datacenter (Tesla Line). Here are the revenue splits since that reporting started: 📷 https://hyperinflation2020.tumblr.com/private/618394798232158208/tumblr_3AdufrCWUFwLgyQw2 📷 https://hyperinflation2020.tumblr.com/private/618394810632601600/tumblr_2jmajktuc0T78Juw7

It is worth stopping here and thinking about the huge increase in sales delivered by the Tesla line. The reason for this huge boom is the sudden increase in interest in numerical techniques for machine learning. Let’s go on a brief detour here to understand what machine learning is, because a lot of people want to hype it but not many want to tell you what it actually is. I have the misfortune of being very familiar with the industry, which prevented me from buying into the hype. Oops – sometimes it really sucks being educated.

What is Machine Learning?

At a very high level, machine learning is all about trying to get some sort of insight out of data. Most of the core techniques used in machine learning were developed a long time ago, in the 1950s and 1960s. The most common machine learning technique, which most people have heard of and may be vaguely familiar with, is called regression analysis. Regression analysis involves fitting a line through a bunch of datapoints. The most common type of regression analysis is called “Ordinary Least Squares” (OLS) regression, and that type of regression has a “closed form” solution, which means that there is a very simple calculation you can do to fit an OLS regression line to data. As it happens, fitting a line through points is not only easy to do, it also tends to be the main machine learning technique that people want to use, because it is very intuitive. You can make good sense of what the data is telling you and can understand the machine learning model you are using. Obviously, regression analysis doesn’t require a GPU!
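For the curious, that "closed form" really is tiny. A hedged sketch (my own illustration, not from the post) of fitting y = b0 + b1·x by ordinary least squares via the normal equations:

```python
import numpy as np

# Minimal sketch of the OLS closed-form solution mentioned above: the
# best-fit coefficients solve the normal equations,
#   beta = (X'X)^-1 X'y.
# One small linear solve on the CPU -- no GPU required.

def ols_fit(x, y):
    # Prepend a column of ones so the model includes an intercept term.
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
    return beta  # [intercept, slope]

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.1, 6.9])  # roughly y = 2x + 1 with some noise
intercept, slope = ols_fit(x, y)
print(round(intercept, 2), round(slope, 2))  # 1.06 1.96
```

In practice you would call a library routine (e.g. `np.linalg.lstsq`) for numerical robustness, but the point stands: this whole "machine learning technique" is one direct calculation.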
However, there is another consideration in machine learning: if you want to use a regression model, you still need a human to select the data that you want to fit the line through. Also, sometimes the relationship doesn’t look like a line, but rather like a curve. In that case, you need a human to “transform” the data before you fit a line through it, in order to make the relationship linear. So people had another idea here: what if, instead of getting a person to select the right data to analyse and the right model to apply, you could just get a computer to do that? Of course, the problem with that is that computers are really stupid. They have no preconceived notion of what data to use or what relationship would make sense, so what they do is TRY EVERYTHING! And everything involves trying a hell of a lot of stuff. And trying a hell of a lot of stuff, most of which is useless garbage, involves a huge amount of computation. People tried this for a while through to the 1980s, decided it was useless, and dropped it… until recently.

What changed? Well, we have more data now, and we have a lot more computing power, so we figured let’s have another go at it. As it happens, the premier technique for trying a hell of a lot of stuff (99.999% of which is garbage you throw away) is called “Deep Learning”. Deep learning is SUPER computationally intensive, and that computation happens to involve a lot of matrix multiplication. And guess what just happens to have been doing a lot of matrix multiplication? GPUs! Here is a chart that, for obvious reasons, lines up extremely well with the boom in Tesla GPU sales: 📷 https://hyperinflation2020.tumblr.com/private/618394825774989312/tumblr_IZ3ayFDB0CsGdYVHW

Now we need to realise a few things here. Deep Learning is not some magic silver bullet.
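To make "a lot of matrix multiplication" concrete, here is a toy sketch (the layer sizes are my own made-up numbers) of one forward pass through a two-layer network; essentially all of the arithmetic sits in the two matmuls:

```python
import numpy as np

# Toy illustration: a two-layer neural-network forward pass. Nearly all
# of the floating-point work is in the two matrix multiplications --
# exactly the operation GPUs were already built to do quickly.

rng = np.random.default_rng(0)
batch, d_in, d_hidden, d_out = 64, 256, 512, 10

x  = rng.standard_normal((batch, d_in))      # input batch
W1 = rng.standard_normal((d_in, d_hidden))   # first layer weights
W2 = rng.standard_normal((d_hidden, d_out))  # second layer weights

hidden = np.maximum(x @ W1, 0.0)  # matmul, then ReLU nonlinearity
logits = hidden @ W2              # second matmul

# Rough multiply-add count for the two matmuls alone
# (2 ops per multiply-accumulate):
flops = 2 * batch * d_in * d_hidden + 2 * batch * d_hidden * d_out
print(logits.shape, flops)  # (64, 10) 17432576
```

Even this toy needs about 17 million floating-point operations per forward pass; real networks are orders of magnitude bigger, which is why the workload lands on hardware that can do huge matmuls in parallel.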
There are specific applications where it has proven very useful – primarily areas that have a very large number of very weak relationships between bits of data that sum up into strong relationships. An example of one of those is Google Translate. On the other hand, in most analytical tasks, it is most useful to have an intuitive understanding of the data and to fit a simple, sensible, explainable model to it. Deep learning models are not explainable in an intuitive manner. This is not only because they are complicated, but also because their scattershot technique of trying everything leaves a huge amount of garbage inside the model that cancels itself out when calculating the answer – though it is hard to see how it cancels itself out when stepping through it. Given the quantum of hype on Deep Learning and the space in general, many companies are using “Deep Learning”, “Machine Learning” and “AI” as marketing. Not many companies are actually generating significant amounts of tangible value from Deep Learning.

Back to the Competitive Picture For the Tesla Segment

So NVDA happened to be in the right place at the right time to benefit from the Deep Learning hype. They happened to have a product ready to go and were able to charge a pretty penny for it. But what happens as we proceed from here? Firstly, it looks like the hype from Deep Learning has crested, which is not great from a future demand perspective. Not only that, but we really went from people having no GPUs to people having GPUs, and the next phase is people upgrading their old GPUs. It is much harder to sell an upgrade than to make the first sale. Not only that, but GPUs are not the ideal manifestation of silicon for Deep Learning. NVDA themselves effectively admitted that with their latest iteration in the Datacentre, called Ampere. High Performance Computing, which was the initial use case for Tesla GPUs, was historically all about double precision floating point calculations (FP64).
High-precision calculations are required for simulations in aerospace, oil & gas, and automotive. NVDA basically sacrificed HPC and shifted further towards Deep Learning with Ampere, announced last Thursday. The FP64 performance of the A100 (the latest Ampere chip) increased a fairly pedestrian 24% from the V100, from 7.8 to 9.7 TF. Not a surprise that NVDA lost El Capitan to AMD, given this shift away from a focus on HPC.

Instead, NVDA jacked up their Tensor Cores (i.e. not the GPU cores) and focused very heavily on FP16 computation (a lot less precise than FP64). As it turns out, FP16 is precise enough for Deep Learning, and NVDA recognises that. The future industry standard is likely to be BFloat16 – the format pioneered by Google, who lead in Deep Learning. Ampere now does 312 TF of BF16, which compares to the 420 TF of Google’s TPU V3 – Google’s machine-learning-specific processor. Not quite up to the 2018 board from Google, but getting better – if they cut out all of the CUDA cores and GPU functionality maybe they could get up to Google’s spec.

And indeed this is the problem for NVDA: when you make a GPU it has a large number of different use cases, and you provide a single product that meets all of them. That is a very hard thing to do, and it explains why it has been difficult for competitors to muscle into the GPU space. On the other hand, when you are making a device that does one thing, such as Deep Learning, it is a much simpler task. Google managed to do it with no GPU experience and is still ahead of NVDA. It is likely that Intel will be able to enter this space successfully, as they have widely signalled with the Xe.

There is of course the other large negative driver for Deep Learning, and that is the recession we are now in. Demand for GPU instances on Amazon has collapsed across the board, as evidenced by the fall in pricing.
The graph below shows one example: this data is for renting out a single Tesla V100 GPU on AWS, which is the typical thing to do in an early exploratory phase for a Deep Learning model: https://hyperinflation2020.tumblr.com/private/618396177958944768/tumblr_Q86inWdeCwgeakUvh

With Deep Learning not delivering near-term tangible results, it is the first thing being cut. On their most recent conference call, IBM noted weakness in their cognitive division (AI), and noted weaker sales of their Power servers, which is the line that houses enterprise GPU servers at IBM. Facebook cancelled their AI residencies for this year, and Google pushed theirs out. Even if NVDA can put in a good quarter due to their new product rollout (Ampere), the future is rapidly becoming a very stormy place.

For the Quadro segment

The Quadro segment has been a cash cow for a long time, generating dependable sales and solid margins. AMD just decided to rock the boat a bit. Sensing NVDA’s focus on Deep Learning, AMD seems to be focusing on HPC – the Radeon VII announced recently with a price point of $1899 takes aim at NVDA’s most expensive Quadro, the GV100, priced at $8999. It does 6.5 TFLOPS of FP64 double precision, whereas the GV100 does 7.4 – talk about shaking up a quiet segment.

Pulling things together

Let’s go back to what NVidia fundamentally does – paying their engineers to design chips, getting TSMC to print those chips, and getting board partners in Taiwan to turn them into the final product. We have seen how a confluence of several pieces of extremely good fortune lined up to increase NVidia’s sales and profits tremendously: on the Gaming side, weak competition from AMD until 2014, coupled with a great product in the form of Pascal in 2016, followed by a huge crypto-driven boom in 2017 and 2018; and on the Professional side, a sudden and unexpected increase in interest in Deep Learning driving Tesla demand sky high from 2017-2019.
It is worth noting what these transient factors have done to margins. When unexpected good things happen to a chip company, sales go up a lot, but there are no costs associated with those sales. Strong demand means that you can sell each chip for a higher price, but no additional design work is required, and you still pay the printer, TSMC, the same amount of money. Consequently, NVDA’s margins have gone up substantially: well above their 11.9% long-term average to hit a peak of 33.2%, and more recently 26.5%: https://hyperinflation2020.tumblr.com/private/618396192166100992/tumblr_RiWaD0RLscq4midoP

The question is, what would be a sensible margin going forward? Obviously a 33% operating margin would attract a wall of competition and get competed away, which is why it can only be temporary. However, NVidia has shifted to having a greater proportion of its sales coming from non-OEM, and a greater proportion coming from Professional rather than Gaming. As such, maybe one can be generous and say NVDA can earn an 18% average operating margin over the next cycle. We can sense-check these margins using Intel. Intel has a long-term average EBIT margin of about 25%. Intel happens to actually print the chips as well, so they collect a bigger fraction of the final product that they sell. NVDA, since it only does the design aspect, can’t earn a higher EBIT margin than Intel on average over the long term.

Tesla sales have likely gone too far and will moderate from here – perhaps down to a still more than respectable $2bn per year. Gaming resumes the long-term slide in discrete GPUs, which will likely be replaced by integrated GPUs to a greater and greater extent over time. But let’s be generous and say it maintains $3.5bn per year for the add-in board, and let’s assume we keep getting $750mn odd of Nintendo Switch revenues (despite that product being past the peak of its cycle, with Nintendo themselves forecasting a sales decline).
Let’s assume AMD struggles to make progress in Quadro, despite undercutting NVDA on price by 75%, with continued revenues at $1.2bn. Add on the other $1.2bn of Automotive, OEM and IP (I am not even counting the fact that car sales have collapsed and Automotive is likely to be down big), and we would end up with revenues of $8.65bn. At an average operating margin of 20% through the cycle that would give $1.75bn of operating earnings power, and if I say that the recent Mellanox acquisition manages to earn enough to pay for all the interest on NVDA’s debt, and I assume a tax rate of 15%, we would have around $1.5bn in net income.

This company currently has a market capitalisation of $209bn. It blows my mind that it trades on 139x what I consider to be fairly generous earnings – earnings that NVidia never even got close to seeing before the confluence of good luck hit them. But what really stuns me is that investors are actually willing to extrapolate this chain of unlikely and positive events into the future. Shockingly, Intel has a market cap of $245bn, only $40bn more than NVDA, but Intel’s sales and profits are 7x higher. And while Intel is facing competition from AMD, it is much more likely to hold onto those sales and profits than NVDA is. These are absolutely stunning valuation disparities.

If I didn’t see NVDA’s price, and I started from first principles and tried to calculate a prudent price for the company, I would have estimated a $1.5bn normalised profit, maybe on a 20x multiple, giving them the benefit of the doubt despite heading into a huge recession, and considering the fact that there is not much debt and the company is very well run. That would give you a market cap of $30bn, and a share price of $49. And it is currently $339. Wow. Obviously I’m short here!
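The back-of-the-envelope valuation above can be reproduced in a few lines. This is just a restatement of the arithmetic in the text: the segment revenues, the 20% through-cycle operating margin, and the 15% tax rate are the figures assumed by the author, not market data.

```python
# Sanity-check the normalised-earnings estimate from the post.
# Segment revenue assumptions (in $bn), as stated in the text.
segments_bn = {
    "Tesla (datacentre)": 2.0,
    "Gaming add-in board": 3.5,
    "Nintendo Switch": 0.75,
    "Quadro": 1.2,
    "Automotive/OEM/IP": 1.2,
}

revenue = sum(segments_bn.values())      # 8.65
op_margin = 0.20                         # assumed through-cycle operating margin
tax_rate = 0.15                          # assumed tax rate

op_income = revenue * op_margin          # ~1.73 bn "operating earnings power"
net_income = op_income * (1 - tax_rate)  # ~1.47 bn, i.e. "around $1.5bn"

market_cap = 209.0                       # $bn at time of writing
pe = market_cap / net_income             # ~142x here; the text's 139x uses the
                                         # rounded $1.5bn net income figure
fair_cap = net_income * 20               # 20x multiple -> roughly $30bn

print(f"revenue ${revenue:.2f}bn, net income ${net_income:.2f}bn, P/E {pe:.0f}x")
```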
How Not To Pressurize Your Mind While Trading Crypto?
As a crypto trader, you must realize that crypto trading is time-consuming. It’s hard not to check prices and price charts every second when you know your money is on the line. People get so obsessed with social media and crypto trends that it seeps into their dreams. The worst part is it never ends, and it never takes a break. The market is always open, and trades occur at all times; people stare at the screen all day trying to cash in on an opportunity. The volatility of the crypto market is why it is easy to get stressed out; your mind is continually thinking of new ways to enhance trades, and a time might come when the pressure becomes overwhelming. This will affect your ability to make the right decisions, and your health will be at risk. Some signs that your mind is pressured and experiencing trading stress include being hyper-alert, taking short and shallow breaths every time a trade goes awry, and experiencing sick feelings and other negative emotions.
It’s pretty clear that detaching from crypto is by no means easy; it’s a skill that takes time to cultivate. However, it is best to detach in order to shield your mental health while trading or investing in crypto. Below are tips that help keep the pressure at bay:
Fix Trading Hours
Even though exchanges are open at all hours of the day, you don’t have to be. Your body is not a machine and needs some time off. You can’t cash in on every trade opportunity even if you’re up at all times, so take a break. Fix the hours during which you will trade and stick to them, no matter the temptation.
Losses should be treated as a Learning Opportunity
When a trade goes awry, there’s usually a lesson to learn from it. Don’t let it be an avenue for despair or anger; instead, learn from the situation to make better decisions next time. Sometimes it is not your fault and you were just unlucky; focus on the long-term plan and don’t get overwhelmed. Analysis always helps to minimize emotional outbursts.
Put the Emotions Aside
When you trade with your emotions, you’re bound to have lots of issues. The pressure on your mind could be overwhelming. That’s why it is essential to tame the feelings; cold logic works best in crypto trading. If you don’t control your emotions, crypto trading will drive you insane. Don’t feel too good and don’t feel too bad about wins and losses; endeavor to strike a balance.
Make a plan and don’t deviate
What’s your plan going into trading? Once you have an idea about how you intend to trade instead of simply willing the charts to go in a particular direction, trading becomes more natural. To minimize the pressure of trading, plan out your trade in advance and set your point of entry and exits. Don’t change your plans in the middle of a trade because you feel a particular way; don’t deviate from your plan.
Have a second source of income
It’s hard to keep the pressure off if trading is your source of livelihood; rational thinking goes out the window when your income source is on the line. That’s why you need another source of income; it’s easier to keep calm when you don’t depend on crypto trading to survive.
At the beginning of trading, keeping a positive mindset is easy. But then the first trade goes wrong, and the pressure kicks in: the need to always make the right choice and never miss any opportunity. However, if you practice these tips and stick to them, you will find a way to keep your mind from being pressured while trading crypto. Getting to this point is crucial; it will make you a better trader. If you like this, never hesitate to give us a thumbs-up.
With Bitcoin Suddenly Surging, Canaan Stock Is Also Going Up Today
Both incidents in question: (1) I think my browser was hijacked for this one. Guessing a rogue extension? While I was in the process of transferring crypto to a hardware wallet (SecuX), the web wallet fed me an address which I only later discovered is different from the hw wallet's default one (I was in a hurry, didn't verify the discrepancy, first time using it for that particular crypto). Immediately transferred all the remaining crypto in that wallet out just in case it's the wallet that's unsafe. This happened last Friday. Spent a good couple of hours that day resetting my PC. (2) Got an alert at about 3:30am this morning about crypto/stock buys made on my Robinhood account. Woke up before trading hours and cancelled the stock buys. Went ahead and also sold the cryptos bought earlier for a small profit before resetting my password and finally setting up 2FA. Is there anything else I should be on the lookout for? Not seeing any irregular logins to my password generator. Def will be patrolling for any abnormal processes and getting rid of as many Chrome extensions as possible. Possibly thinking about turning an old notebook into a Linux machine (gonna be a learning curve).
Hey all, I've been researching coins since 2017 and have gone through hundreds of them in the last 3 years. I got introduced to blockchain via Bitcoin of course, analyzed Ethereum thereafter, and since that moment I have had a keen interest in smart contract platforms. I’m passionate about Ethereum but I find Zilliqa to have a better risk-reward ratio, especially because Zilliqa has found an elegant balance between being secure, decentralized and scalable in my opinion.
Below I post my analysis of why from all the coins I went through I’m most bullish on Zilliqa (yes I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano etc.). Note that this is not investment advice and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
Fun fact: the name Zilliqa is a play on ‘silica’ (silicon dioxide), standing for “Silicon for the high-throughput consensus computer.”
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through and once you are zoning out head to the next part.
Technology and some more:
The technology is one of the main reasons why I’m so bullish on Zilliqa. First thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
The technical white paper was made public in August 2017, and since then they have achieved everything stated in it. They also created their own open-source, intermediate-level smart contract language called Scilla (a functional programming language similar to OCaml).
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, along with 2400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion and currently only mining rewards are left. The maximum supply is 21 billion, with annual inflation currently at 7.13% and set to decrease over time.
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing but decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed to apply sharding to a public smart contract blockchain, where the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms, and Zilliqa uses network, transaction and computational sharding. Network sharding opens up the possibility of using transaction and computational sharding on top. Zilliqa does not use state sharding for now. We’ll come back to this later.
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it’s good to keep in mind that making a blockchain decentralised, secure and scalable at the same time is still one of the main hurdles to widespread usage of decentralised networks. In my opinion this needs to be solved before blockchains can get to the point where they can create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. After all, these premises need to be true, otherwise there isn’t a fundamental case to be bullish on Zilliqa, right?
Down the rabbit hole
How have they achieved this? Let’s define the basics first: key players on Zilliqa are the users and the miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is also referred to as ‘network sharding’. Miners subsequently are randomly assigned to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions and the outputs of these shards are eventually combined by the DS shard as they reach consensus on the final state. More on how these DS shards reach consensus (via pBFT) will be explained later on.
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS Block consists of 100 Tx Blocks. And as previously mentioned, there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Becoming a shard node or DS node is determined by the result of a PoW cycle (Ethash) at the beginning of the DS Block. All candidate mining nodes compete with each other and run the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty are allowed onto the network. To put it in perspective: the average difficulty for one DS node is ~2 Th/s, equaling 2,000,000 Mh/s, or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 Mh/s. Each DS Block, 10 new DS nodes are admitted. A shard node needs to provide around 8.53 GH/s currently (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of hashing power (Ethash) available you could mine solo.
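The GPU-equivalent figures quoted above are straightforward unit conversions, using the 35.4 Mh/s per GTX 1070 figure from the text:

```python
# Convert the quoted network difficulties into GTX 1070 equivalents.
GTX1070_MHS = 35.4                          # Ethash hashrate per card (from the text)

# One DS node: ~2 Th/s average difficulty. 1 Th/s = 1,000,000 Mh/s.
ds_node_mhs = 2.0 * 1_000_000
ds_cards = ds_node_mhs / GTX1070_MHS        # ~56,497 cards -> "55 thousand+"

# One shard node: ~8.53 GH/s. 1 GH/s = 1,000 Mh/s.
shard_node_mhs = 8.53 * 1_000
shard_cards = shard_node_mhs / GTX1070_MHS  # ~241 cards -> "around 240"

print(round(ds_cards), round(shard_cards))
```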
The PoW cycle of 60 seconds is a peak performance and acts as an entry ticket to the network. The entry ticket is called a sybil resistance mechanism and makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. This PoW process repeats after every 100 Tx Blocks, which corresponds to roughly 1.5 hours. In between, no PoW needs to be done, meaning Zilliqa’s energy consumption to keep the network secure is low. For more detailed information on how mining works click here.

Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole we must first understand why Zilliqa goes through all of the above technicalities, and understand a bit more about what a blockchain is on a fundamental level. Because the core of Zilliqa’s consensus protocol relies on the usage of pBFT (practical Byzantine Fault Tolerance), we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and then come back to this article. We will use this site to navigate through a few concepts.
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
Taking the liberty of paraphrasing examples and definitions given by Samuel Brooks’ medium article, he describes the definition of a blockchain (like Zilliqa) as: “A peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
Next, he states that: "blockchains are fundamentally systems for managing valid state transitions”. For some more context, I recommend reading the whole Medium article to get a better grasp of the definitions and understanding of state machines. Nevertheless, let’s try to simplify and compile it into a single paragraph. Take traffic lights as an example: all their states (red, amber, and green) are predefined, all possible outcomes are known, and it doesn’t matter if you encounter the traffic light today or tomorrow; it will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light’s state going from green to red (via amber) and another light’s from red to green.
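The traffic-light analogy can be sketched as a tiny state machine: a fixed set of states and a deterministic transition function, so the light behaves identically today or tomorrow. This is an illustrative sketch only, not Zilliqa code:

```python
# A traffic light as a finite state machine: predefined states,
# deterministic transitions, no history needed beyond the current state.
TRANSITIONS = {
    "green": "amber",   # sensor/button fires: start stopping traffic
    "amber": "red",
    "red": "green",
}

def step(state: str) -> str:
    """Advance the light to its next (and only possible) state."""
    return TRANSITIONS[state]

state = "green"
for _ in range(3):      # one full cycle: green -> amber -> red -> green
    state = step(state)
print(state)
```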
With public blockchains like Zilliqa, this isn’t so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous one is added, resulting in a chain of blocks with transactions in them that everyone can verify, from block #1 to the current #647,000+ block. The state is ever-changing, and the states it can find itself in are infinite. And while the traffic light might work in tandem with various other traffic lights, that is rather insignificant compared to a public blockchain, because Zilliqa consists of 2400 nodes that need to work together to achieve consensus on what the latest valid state is, while some of these nodes may have latency or broadcast issues, drop offline, or deliberately try to attack the network, etc.
Now go back to the Viewblock page, take a look at the number of transactions and addresses and the block and DS height, and then hit refresh. As expected, you see new, incremented values for one or all of these parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communications between nodes, and as such no GPU is involved (only CPU). As a result, the total energy consumed to keep the blockchain secure, decentralized and scalable is low.
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017. Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
pBFT can tolerate just under ⅓ of the nodes being dishonest (offline counts as Byzantine, i.e. dishonest) while the consensus protocol continues to function without stalling or hiccups. Once ⅓ or more of the nodes are dishonest, but no more than ⅔, the network stalls and a view change is triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest do double-spend attacks become possible.
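The ⅓ / ⅔ thresholds follow from pBFT's standard requirement of n ≥ 3f + 1 nodes to tolerate f faulty ones, with a quorum of 2f + 1 votes to commit. The sketch below just restates that arithmetic for a 600-node committee:

```python
# pBFT fault-tolerance arithmetic: with n nodes, up to f = (n - 1) // 3
# Byzantine (dishonest or offline) nodes can be tolerated, and agreement
# requires a quorum of 2f + 1 matching votes.
def pbft_limits(n: int) -> tuple[int, int]:
    f = (n - 1) // 3        # max tolerable faulty nodes
    quorum = 2 * f + 1      # votes needed to commit a block
    return f, quorum

n = 600                     # nodes in one Zilliqa DS committee (from the text)
f, quorum = pbft_limits(n)
print(f, quorum)            # 199 faulty nodes tolerated, 399-vote quorum
```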
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
Another benefit of using pBFT for consensus, besides low energy use, is the immediate finality it provides: once your transaction is included in a block and the block is added to the chain, it’s done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although lengthy already, we have skipped over some of the inner workings of Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, the sybil resistance mechanism, pBFT, etc. Another thing we haven’t looked at yet is the degree of decentralization.
Currently, there are four shards, each consisting of 600 nodes: 1 shard with 600 so-called DS nodes (Directory Service; they need to achieve a higher difficulty than shard nodes) and 1800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has been steadily declining, from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes that perform pBFT; there is no data on where the PoW sources are coming from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2400 nodes, allowing more nodes and the formation of more shards, which will let the network keep scaling according to demand.

Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another type of node) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain, which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
The seed nodes were at first only operated by Zilliqa themselves, exchanges, and Viewblock. Operators of seed nodes like exchanges had no incentive to open them up to the greater public, so they were centralised at first. Decentralisation at the seed-node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved in consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
5% of the block rewards are being assigned to seed nodes (from the beginning in 2019) and those are being used to pay out ZIL stakers. The 5% block rewards with an annual yield of 10.03% translate to roughly 610 MM ZILs in total that can be staked. Exchanges use the custodial variant of staking and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is being done by sending ZILs to a smart contract created by Zilliqa and audited by Quantstamp.
With a high number of DS and shard nodes, and seed nodes becoming more decentralized too, Zilliqa qualifies for the label of decentralized in my opinion.
Generalized: programming languages can be divided into ‘object-oriented’ and ‘functional’. Here is an ELI5 given by a software development academy: “All programs have two basic components: data – what the program knows – and behavior – what the program can do with that data. Object-oriented programming states that combining data and related behaviors in one place, called an “object”, makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity.”
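The object-oriented vs functional split can be shown in a few lines of Python. This is illustrative only; Scilla itself is a functional language closer to OCaml:

```python
from dataclasses import dataclass

# Object-oriented: data and behaviour live together in one object.
@dataclass
class CounterObj:
    value: int = 0

    def increment(self) -> None:
        # Behaviour mutates the object's own data in place.
        self.value += 1

# Functional: data is plain and immutable; behaviour is a separate,
# pure function that returns a new value instead of mutating state.
def increment(value: int) -> int:
    return value + 1

c = CounterObj()
c.increment()
print(c.value)        # the object's internal state changed
print(increment(0))   # the input is untouched; a new value is returned
```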
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
Scilla is blockchain-agnostic and can be implemented on other blockchains as well. It is recognized by academics and won a Distinguished Artifact Award at the end of last year.
One of the reasons the Zilliqa team decided to create their own programming language focused on preventing smart contract vulnerabilities is that adding logic to a blockchain means you cannot afford to make mistakes; otherwise, it could cost you. It’s all great and fun that blockchains are immutable, but updating your code because you found a bug isn’t the same as with a regular web application, for example. And smart contracts inherently involve cryptocurrencies in some form, and thus value.
Another difference with programming languages on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you basically pay for computational costs. Sending a ZIL from address A to address B costs 0.001 ZIL currently. Smart contracts are more complex, often involve various functions and require more gas (if gas is a new concept click here ).
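Gas works as a fee on computation: the total cost is roughly the gas a transaction consumes multiplied by the gas price, paid in the native coin. A minimal sketch of that idea (the gas figures below are illustrative assumptions, not live Zilliqa parameters):

```python
# Transaction fee = gas used x gas price (paid in the native coin).
# The numbers here are illustrative, not current network values.
def tx_fee(gas_used: int, gas_price_zil: float) -> float:
    return gas_used * gas_price_zil

# A simple A -> B payment consumes very little gas...
simple_transfer = tx_fee(gas_used=1, gas_price_zil=0.001)
# ...while a smart-contract call doing more computation costs more.
contract_call = tx_fee(gas_used=500, gas_price_zil=0.001)

print(simple_transfer, contract_call)
```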
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems.” (Scilla design story part 1)
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
“Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the art tool for mechanized proofs about properties of programs.”
Simply put, with Scilla and its accompanying tooling, developers can mathematically prove that the smart contract they’ve written does what they intend it to do.
Smart contracts in a sharded environment and state sharding
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and the effect of state sharding). This is a complex topic, and I’m not able to explain it any better than what is posted here, but I will try to compress the post into something easy to digest.
Earlier we established that Zilliqa can process transactions in parallel thanks to network sharding; this is where its linear scalability comes from. Transactions fall into three categories: a simple transfer from address A to address B (Category 1), a transaction where a user interacts with a single smart contract (Category 2), and the most complex case, where triggering one transaction involves multiple smart contracts (Category 3). Shards are able to process transactions on their own without interference from the other shards. That works for Category 1 transactions, and for Category 2 transactions when the sender’s address is in the same shard as the smart contract, but Category 3 transactions always require communication between shards. Solving that requires defining a set of communication rules the protocol must follow in order to process all transactions in a generalised fashion.
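The three categories can be sketched in code. This is a hedged illustration only: deriving a shard from an address modulo the shard count is an assumption for demonstration, not Zilliqa's actual assignment rule.

```python
# Sketch of shard assignment and transaction classification in a sharded
# network. Mapping an address to a shard via `int(address, 16) % NUM_SHARDS`
# is an illustrative assumption, not Zilliqa's real protocol.

NUM_SHARDS = 4


def shard_of(address: str) -> int:
    # Assume each address is deterministically assigned to one shard.
    return int(address, 16) % NUM_SHARDS


def classify(sender: str, contracts: list[str]) -> int:
    """Return the transaction category described above.

    Category 1: plain A -> B transfer, no contracts involved.
    Category 2: interaction with a single smart contract.
    Category 3: multiple smart contracts triggered by one transaction.
    """
    if not contracts:
        return 1
    if len(contracts) == 1:
        return 2
    return 3


def needs_cross_shard(sender: str, contracts: list[str]) -> bool:
    # A shard can process the transaction alone only if every involved
    # address (sender and all contracts) lives in the same shard.
    shards = {shard_of(a) for a in [sender, *contracts]}
    return len(shards) > 1


print(classify("0xa1", []))                # 1
print(classify("0xa1", ["0xb2"]))          # 2
print(classify("0xa1", ["0xb2", "0xc3"]))  # 3
```

In this toy model a Category 2 transaction only needs cross-shard communication when the contract happens to sit in a different shard from the sender, while Category 3 almost always does, matching the intuition in the paragraph above.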
There is no strictly defined roadmap, but here are the topics being worked on. Via the Zilliqa website there is also more information on the projects they are working on.
Business & Partnerships
It’s not only technology in which Zilliqa seems to be excelling, as their ecosystem has been expanding and is starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world, and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PWC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway and 13% have already brought those initiatives live to the market. There is also an increasing list of organizations starting to provide digital payment services. Moreover, Singaporean blockchain developer Building Cities Beyond has recently created a $15 million innovation grant to encourage development on its ecosystem. All of this suggests that Singapore is positioning itself as (one of) the leading blockchain hubs in the world.
Zilliqa already seems to be taking advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange, financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, and SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies and unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that it became a partner and shareholder in TEN31 Bank, a fully regulated bank allowing for tokenization of assets that aims to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading continue to grow, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance, and the technology being built on top of it.
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use cases of stablecoins. I recommend everybody read this text that Amrit Kumar (one of the co-founders) wrote. These stablecoins will be integrated into the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies, for example, start to use stablecoins for payments or remittances instead of solely for trading.
Zilliqa also released their DeFi strategic roadmap (dating November 2019), which seems to align well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa, made by Switcheo, which allows cross-chain trading (atomic swaps) between ETH, EOS and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin. And as Zilliqa is all about regulations and being compliant, I’m speculating it will be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on the block explorer, and XIDR (an Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
“There are two basic building blocks in DeFi/OpFi, though: 1) stablecoins, as you need a non-volatile currency to get access to this market, and 2) a DEX to be able to trade all these financial assets. The rest are built on top of these blocks.
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
Additionally, they also have the ZILHive initiative that injects capital into projects. There have already been six waves of various teams working on infrastructure, innovation and research, and they are not from ASEAN or Singapore only but global: see the grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem. This includes individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with a human-readable name and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and the resulting low fees, which is why the UD team launched on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and was recently added to Binance’s margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have “tech people”: they have a mix of tech people, business people, marketers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
Marketing & Community
Zilliqa has a very strong community. If you just follow their Twitter, their engagement is much higher than you would expect for a coin with approximately 80k followers. They have also been ‘coin of the day’ on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data. According to that data, it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been somewhat frozen in recent months, Zilliqa seems to be on its own bull run: it was somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram also has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other projects' official channels. Their local communities also seem to be growing.
Moreover, their community started ‘Zillacracy’ together with the Zilliqa core team ( see www.zillacracy.com ). It’s a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot, and they will also run their own non-custodial seed node for staking. This seed node will allow them to generate revenue and become a self-sustaining entity that could potentially scale up into a decentralized company working in parallel with the Zilliqa core team. Comparing this to the other smart contract platforms (e.g. Cardano, EOS, Tezos, etc.), none of them seem to have started a similar initiative (correct me if I’m wrong though). In my opinion, this suggests that these other smart contract platforms do not fully understand how to utilize the ‘power of the community’. This is something you cannot ‘buy with money’, and it puts many projects in the space at a disadvantage.
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZIL while tweeting with a specific hashtag. After an initial pilot program, they recently used it in partnership with the Singapore Red Cross for a marketing campaign. It seems like a very valuable social product with a good use case, and I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to generate network effects is a very smart and innovative idea.
Regarding Zeeves, this is a tipping bot for Telegram. They already have thousands of signups and plan to keep upgrading it so more and more people can use it (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It’s a very smart approach to grow their communities and get people familiar with ZIL, and I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need and is good at finding the right innovative tools to grow and scale.
To be honest, I haven’t covered everything (I’m also reaching the character limit, haha). So many updates have been happening lately that it's hard to keep up: the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures, the widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has mentioned Zilliqa lately, acknowledging that both projects have a lot of room to grow. There is much more info of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions please post here!