This application can be used to manage stock transactions and to maintain a trading diary by adding stocks, transactions for these stocks, feedback and strategies. Based on all transactions, key performance indicators are calculated for different time periods. Stock quotes can also be downloaded automatically.
This application already exists in an older version which was developed under the motto "getting things done" and built on a relational database. I decided to re-develop the whole application with more sophisticated techniques such as CQRS and Event Sourcing.
The project `StockTradingAnalysis.Web.Migration` is used to load the data from the relational database of my legacy application into the object-oriented database by firing commands and thus using the event sourcing system (it can be ignored).
Setup project with MSSQL (a RavenDB implementation exists as well)
- Create a new database
- Update the connection strings in `Web.config` (see the sketch after this list)
- Execute the script `StockTradingAnalysis.Data.MSSQL.Scripts.Create_DataStore_Table.sql`
- If RavenDB is to be used, the bindings in `\StockTradingAnalysis.Web\App_Start\BindingModules\EventSourcingBindingModule.cs` need to be changed. Note that Hangfire currently needs an SQL database and uses the MSSQL connection string, see `Startup.cs`
- Run the project `StockTradingAnalysis.Web`
- Open Administration in the GUI and generate test data
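For orientation, a minimal sketch of what the `connectionStrings` section in `Web.config` could look like. The connection string name and the server/database values below are assumptions and have to match whatever the project actually expects:

```xml
<!-- Illustrative sketch only: the connection string name ("MSSQL") and the
     server/database values are assumptions, not the project's actual settings. -->
<connectionStrings>
  <add name="MSSQL"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=StockTradingAnalysis;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```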
- Command Query Responsibility Segregation (CQRS)
- Event Sourcing (ES)
- RavenDB
- Bootstrap
- SignalR
- ReactJS.NET
- Axios
- AutoMapper
- Hangfire
The architecture was designed like most ES systems, although it is of course simpler than a professional product such as Event Store. The core components of the ES system are the event store, which reads and writes the events, and the event bus, which distributes the events to the event handlers. The persistence layer is controlled by a DBMS-dependent implementation called the persistent event store; there is one for RavenDB and one for MS SQL.
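As a rough illustration of how these core pieces interact (the interface and class names below are assumptions for this sketch and do not mirror the actual code in this repository):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only; the real interfaces in this repository are named differently.
public interface IDomainEvent
{
    Guid AggregateId { get; }
    int Version { get; }
}

// DBMS-dependent persistence (one implementation exists for RavenDB, one for MS SQL).
public interface IPersistentEventStore
{
    void Save(IEnumerable<IDomainEvent> events);
    IEnumerable<IDomainEvent> Load(Guid aggregateId);
}

// Distributes persisted events to all registered event handlers.
public interface IEventBus
{
    void Publish(IDomainEvent @event);
}

public class EventStore
{
    private readonly IPersistentEventStore _persistence;
    private readonly IEventBus _eventBus;

    public EventStore(IPersistentEventStore persistence, IEventBus eventBus)
    {
        _persistence = persistence;
        _eventBus = eventBus;
    }

    // Persist new events, then publish them so the read side can update its models.
    public void Save(IEnumerable<IDomainEvent> events)
    {
        var eventList = events.ToList();
        _persistence.Save(eventList);

        foreach (var @event in eventList)
            _eventBus.Publish(@event);
    }

    // Replay the full history of an aggregate.
    public IEnumerable<IDomainEvent> GetEvents(Guid aggregateId) => _persistence.Load(aggregateId);
}
```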
The system also supports snapshots - a projection of the current state of an aggregate. If snapshots are explicitly activated for an aggregate, the event store asks the snapshot processor to take care of them whenever events are persisted. The snapshot processor then determines whether a snapshot is needed, and if so the snapshot store persists it.
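A minimal sketch of that snapshot flow; the names and the simple interval-based policy are assumptions, not the project's actual implementation:

```csharp
using System;

// Illustrative sketch only; names and the interval-based policy are assumptions.
public interface ISnapshot
{
    Guid AggregateId { get; }
    int Version { get; }
}

// An aggregate that explicitly activated snapshots can produce one on demand.
public interface ISnapshotOriginator
{
    ISnapshot CreateSnapshot();
}

public interface ISnapshotStore
{
    void Save(ISnapshot snapshot);
}

public class SnapshotProcessor
{
    private readonly ISnapshotStore _snapshotStore;
    private readonly int _interval;

    public SnapshotProcessor(ISnapshotStore snapshotStore, int interval = 100)
    {
        _snapshotStore = snapshotStore;
        _interval = interval;
    }

    // Called by the event store after events were persisted: decide whether a
    // snapshot is due and, if so, let the snapshot store persist it.
    public void Process(ISnapshotOriginator aggregate, int currentVersion)
    {
        if (currentVersion % _interval == 0)
            _snapshotStore.Save(aggregate.CreateSnapshot());
    }
}
```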
All reads and writes are separated using the CQRS architectural pattern. The domain data to be written is transported by commands. These commands are sent to a command dispatcher, which locates the correct command handler for the command. In the command handler, the aggregate repository (in fact, every aggregate type has its own repository) returns the aggregate by loading and applying all events since the aggregate was created. Then the command can be applied to the aggregate, and the uncommitted events are written to the event store.
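Sketched in strongly simplified form; the command, aggregate and repository below are hypothetical and only illustrate the flow described above:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only; the actual commands, aggregates and repositories
// in this repository are named and structured differently.
public interface ICommand
{
    Guid AggregateId { get; }
}

public class RenameStockCommand : ICommand   // hypothetical command
{
    public Guid AggregateId { get; }
    public string NewName { get; }

    public RenameStockCommand(Guid aggregateId, string newName)
    {
        AggregateId = aggregateId;
        NewName = newName;
    }
}

public class StockAggregate   // strongly simplified aggregate
{
    private readonly List<object> _uncommittedEvents = new List<object>();

    public Guid Id { get; private set; }
    public string Name { get; private set; }

    public void Rename(string newName)
    {
        Name = newName;
        _uncommittedEvents.Add(new { Id, Name = newName });   // a real aggregate raises a typed event
    }

    public IEnumerable<object> UncommittedEvents => _uncommittedEvents;
}

// Every aggregate type has its own repository; GetById replays all events of the aggregate.
public interface IAggregateRepository<TAggregate>
{
    TAggregate GetById(Guid id);
    void Save(TAggregate aggregate);   // appends the uncommitted events to the event store
}

public class RenameStockCommandHandler
{
    private readonly IAggregateRepository<StockAggregate> _repository;

    public RenameStockCommandHandler(IAggregateRepository<StockAggregate> repository)
    {
        _repository = repository;
    }

    public void Handle(RenameStockCommand command)
    {
        var stock = _repository.GetById(command.AggregateId);   // rebuilt from its event history
        stock.Rename(command.NewName);                          // applies the change
        _repository.Save(stock);                                // uncommitted events go to the event store
    }
}
```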
The process manager coordinator is involved when several events or commands need to be orchestrated. The coordinator observes all events and commands from the event bus and the command dispatcher. When a message arrives, it asks the process manager finder repository whether an instance of the correct process manager for this message is already running. The identification is done via a correlation id, which maps information from the message to a process manager. If no instance can be found, the correct manager is created. Every process manager can publish events or dispatch commands and holds an (in-memory) state.
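A compressed sketch of that lookup-or-create behaviour, with hypothetical names; the actual coordinator and finder repository are more elaborate (e.g. correlation id mappings per message type):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only.
public interface IProcessManager
{
    Guid CorrelationId { get; }
    void Handle(object message);   // may publish events or dispatch commands; state is kept in memory
}

public class ProcessManagerCoordinator
{
    private readonly Dictionary<Guid, IProcessManager> _running = new Dictionary<Guid, IProcessManager>();

    // Invoked for every event/command observed on the event bus or command dispatcher.
    public void HandleMessage(object message, Guid correlationId, Func<IProcessManager> createManager)
    {
        if (!_running.TryGetValue(correlationId, out var manager))
        {
            // No running instance for this correlation id -> create the correct process manager.
            manager = createManager();
            _running[correlationId] = manager;
        }

        manager.Handle(message);
    }
}
```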
Aggregates, as well as the MVC controllers, query handlers and event handlers, use domain services (which may depend on external services) when domain logic is needed.
Every event that is persisted is published to the event bus. An event handler catches the event and stores an optimized read model in the corresponding model repository. This is currently done in memory, but it could also be persisted to a database.
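A small sketch of such a projection; the event, handler and repository names are assumptions:

```csharp
using System;

// Illustrative sketch only.
public class StockRenamedEvent
{
    public Guid AggregateId { get; set; }
    public string NewName { get; set; }
}

public class StockReadModel
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Currently backed by an in-memory store, but could be persisted to a database.
public interface IModelRepository<TModel>
{
    TModel GetById(Guid id);
    void Update(TModel model);
}

public class StockRenamedEventHandler
{
    private readonly IModelRepository<StockReadModel> _repository;

    public StockRenamedEventHandler(IModelRepository<StockReadModel> repository)
    {
        _repository = repository;
    }

    // Subscribed on the event bus; keeps the optimized read model up to date.
    public void Handle(StockRenamedEvent @event)
    {
        var model = _repository.GetById(@event.AggregateId);
        model.Name = @event.NewName;
        _repository.Update(model);
    }
}
```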
On the read side, the MVC controllers, for example, ask the query dispatcher to return the domain model for a given query. The query handler that was implemented to handle this specific query retrieves the model from the model repository and returns it. AutoMapper then maps the domain model to the view model requested by the controller, and the data can be pushed to the frontend.
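For illustration, the read side could look roughly like this; the query, model and view model names are assumptions, and older AutoMapper versions would use the static `Mapper` API instead of `IMapper`:

```csharp
using System;
using System.Web.Mvc;
using AutoMapper;

// Illustrative sketch only; query, model and view model names are assumptions.
public class StockDetailsModel { public Guid Id { get; set; } public string Name { get; set; } }
public class StockDetailsViewModel { public string Name { get; set; } }

public interface IQuery<TResult> { }

public class StockDetailsQuery : IQuery<StockDetailsModel>
{
    public Guid Id { get; set; }
}

// Locates the query handler registered for the given query type and executes it.
public interface IQueryDispatcher
{
    TResult Execute<TResult>(IQuery<TResult> query);
}

public class StockController : Controller
{
    private readonly IQueryDispatcher _queryDispatcher;
    private readonly IMapper _mapper;

    public StockController(IQueryDispatcher queryDispatcher, IMapper mapper)
    {
        _queryDispatcher = queryDispatcher;
        _mapper = mapper;
    }

    public ActionResult Details(Guid id)
    {
        // Read side: ask the dispatcher for the domain model ...
        var model = _queryDispatcher.Execute(new StockDetailsQuery { Id = id });

        // ... and map it to the view model that the frontend expects.
        var viewModel = _mapper.Map<StockDetailsViewModel>(model);
        return View(viewModel);
    }
}
```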
For Web Sockets, the SignalR hub also retrieves its data from the query dispatcher and sends it to the frontend.
log4net is configured to write to files and Application Insights uses log4net to send monitoring data to Azure.
The instrumentation key is configured in \StockTradingAnalysis.Web\Views\Shared\_Layout.cshtml and \StockTradingAnalysis.Web\ApplicationInsights.config; it needs to be replaced to send data to the correct Application Insights node.
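For reference, a typical log4net file appender configuration could look like the sketch below; the file name, size limits and pattern are assumptions rather than the exact configuration shipped with this project, and the forwarding to Application Insights is wired up separately:

```xml
<!-- Illustrative log4net configuration; file name, size and pattern are assumptions
     and may differ from the configuration actually used in this project. -->
<log4net>
  <appender name="FileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="logs\StockTradingAnalysis.log" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maximumFileSize value="10MB" />
    <maxSizeRollBackups value="5" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="FileAppender" />
  </root>
</log4net>
```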
- Dashboard
- Savings plan based on assigned categories for transactions
- Open positions
- Security Name, first buying date, position size, shares, average price, quote, YTD profit, overall profit
- KPIs
- Amount of trades
- Amount of winning/losing trades
- Amount of trades per year/month/week
- System quality number (SQN)
- Pay-Off-Ratio
- CRV
- Expected values
- Average win/loss
- Maximum win/loss
- Average buying volume
- Order costs/taxes
- Average holding period for intraday/long-term positions
- Best asset class
- Best asset
- Maximum drawdown
- Maximum consecutive winners/losers
- Exit/Entry efficiency
- Transactions
- Buy, sell, dividend, split/reverse split
- Filter by time range and asset type
- Security
- Add, edit, delete, update quotes
- Aggregated absolute profit per security
- Transaction history and latest quote per security
- Candlestick chart based on security quotes with flags for buy, sell, dividend and split. The average price is also highlighted as a line graph
- Feedback
- Add, edit, delete
- Percentage of transactions assigned to feedbacks
- Strategies
- Add, edit, delete
- Calculation for buying decisions and open positions
- Stop loss, take profit, price, amount etc. can be configured
- Historical and daily quotes can be downloaded (if a WKN was configured for the security)
- Administration
- Test data generation
- Scheduler for the background jobs that download quotes
- Dashboard
- Statistics: last 10/25/50/75/100/150/all trades (profit, loss, payoff ratio, CRV, SQN etc.)
- Performance: over asset class, long/short, strategy, monthly performance
- Potential: MAE,MFE
- P/L: expected value, trade stability, P/L size distribution, portfolio P&L, hit rate
- Risk: cluster analysis, Monte Carlo simulation
- Savings plan: products, export
- Performance
- All performance key indicators with filtering over time range & asset class
- Transactions
- Export
Screenshots: Calculations, Transactions, Security details, Security charts, Dashboard KPIs