Article

Winner Strategies in a Simulated Stock Market

Graduate School of Management and Economics, Sharif University of Technology, Teymoori Sq., Tehran 145997-3941, Iran
* Author to whom correspondence should be addressed.
Int. J. Financial Stud. 2023, 11(2), 73; https://doi.org/10.3390/ijfs11020073
Submission received: 13 March 2023 / Revised: 15 May 2023 / Accepted: 22 May 2023 / Published: 30 May 2023

Abstract

In this study, we explore the dynamics of the stock market using an agent-based simulation platform. Our approach involves creating a multi-strategy market where each agent considers both fundamental and technical factors when determining their strategy. The agents vary in their approach to these factors and the time interval they use for technical analysis. Our findings indicate that investing heavily in reducing the value–price gap was a successful strategy, even in markets where there were no trading forces to reduce this gap. Furthermore, our results remain consistent across various modifications to the simulation’s structure.

1. Introduction

John Maynard Keynes (1936) once compared the stock market to a “beauty contest”, in which participants select the most aesthetically pleasing pictures, but the prize is awarded to those who align with the most popular choices. Keynes argued that the outcome of such a contest is not the most attractive picture, but rather the one that people guess will be the dominant choice. However, there is a crucial difference between a stock market and a “beauty contest”: in a stock market, companies generate profits and distribute a portion of them as dividends. Earnings and dividends turn the zero-sum game of a stock market into a positive-sum game. At least since the publication of Security Analysis by Graham and Dodd (1934), investors have tried to determine the fundamental values of stocks by projecting future dividends and comparing those projections to market prices. In the authors’ words, “although in the short run, the stock market acts like a voting machine, in the long run, it plays the role of a weighing machine [measuring values, not opinions]”.
In this paper, we model the stock market’s dynamics as a mixed game, considering both the “beauty contest” aspect of the market and the fundamental values of the stocks. Our goal is to identify the strategies that lead to the highest possible returns for investors in an interactive environment where the prices themselves are the result of the strategies selected. We demonstrate that, even if participants, on average, do not give much consideration to the fundamental values of stocks, over time, the market behaves like a weighing machine and the fundamental strategies ultimately emerge victorious.
The remainder of this paper is structured as follows: Section 2 presents a comprehensive literature review. Section 3 provides a detailed description of the model. Section 4 discusses the model’s robustness and primary findings. Section 5 offers concluding remarks, while the appendices contain supplementary discussions and calculations.

2. Literature Review

The market, comprising risky assets, investors, and capital flows, presents a compelling platform for study from various perspectives. Investors endeavor to adopt strategies that yield higher returns while also withstanding unfavorable market conditions. To identify suitable stocks, investors track fundamental values, price trends, or a combination of both. The fundamental value of a stock is typically estimated by predicting future dividends and discounting them at appropriate discount rates. However, the dividends themselves, as well as the required rates of return serving as discount rates, are intertwined with investors’ predispositions toward different stocks through the strategies they choose. This interdependency creates a loop that links asset prices to investors’ strategies.
In an attempt to break out of this loop, classical finance views the market from a macro level, seeking common factors that can explain the overall dynamics. Fama and French (1993, 2015) and Ross (1976) are among the first of many researchers to employ this common-factor view. Regarding the efficacy of investment strategies, a multitude of approaches have been proposed and evaluated against one another, e.g., Jegadeesh and Titman (1993) and Shleifer and Vishny (1997). In the financial literature, the work of Kahneman and Tversky (1979) should be noted, as it developed behavioral finance and has made valuable contributions, especially in explaining market anomalies.
Another avenue for comprehensively examining the market, its assets, investors, and capital flows is through the employment of an agent-based model (ABM). This model considers the market as an arena for agents with distinct investment strategies to interact, thus determining asset prices endogenously. Through the manipulation of parameters in these models, controlled experiments can be designed and executed to test hypotheses.
A noteworthy early ABM was proposed by Kim and Markowitz (1989) to simulate the 1987 market crash. This model features two trader types: negative feedback traders, who construct their portfolios based on a return-variance framework, and positive feedback traders, who seek to limit their losses by emulating a put option. The study aims to investigate the conditions under which a market becomes unstable.
Another prominent ABM is the Levy et al. (1994) model, which incorporates a risky asset that pays dividends. Agents in this model maximize their portfolios based on a logarithmic utility function and calculate expected returns by comparing the dividends with the prevailing price levels. The Levy model is widely utilized to demonstrate the occurrence of market booms and busts.
During the 1990s, the Santa Fe Institute for the study of complex systems developed an ABM specifically designed for financial markets, commonly referred to as the Santa Fe Institute Artificial Stock Market (SFI-ASM). Palmer et al. (1994) published an early iteration of the model, with significant findings presented by Arthur et al. (1996) and LeBaron et al. (1999). LeBaron subsequently investigated several variations of the SFI-ASM in 2001 and 2002 (LeBaron (2001, 2002)). Within the SFI-ASM, agents respond to fundamental and technical signals and modify their strategies utilizing a genetic algorithm (GA). Each agent is provided with a set of rules from which it selects via a classifier. The complex system shares certain characteristics with an actual stock market, particularly in a fast-updating mode. For a comprehensive explanation of the model, we refer the reader to Ehrentreich (2008).
Evstigneev et al. (2009) developed an artificial stock market wherein stock dividends are randomly paid out following a uniform distribution, with investors able to select from a limited number of established strategies. The authors demonstrate that, irrespective of the scenario utilized, the sole surviving strategy that dominates the market is one that evaluates stocks based on their anticipated dividends. In this model, all agents maintain their strategies unchanged, resulting in a passive evolution, as defined by LeBaron (2011).
Each of the three ABMs described above assumes rule-based agents, which adhere to the rules specified by the model. However, an advanced type of agent, referred to as goal-based, strives to achieve objectives without adhering to any predetermined rules. The Lux (1998) and Lux and Marchesi (2000) (LM) model presents these agents, wherein two types of traders participate: fundamental and technical (optimists or pessimists). One distinguishing characteristic of the LM model is the direct interaction between agents. In most ABMs, agents primarily interact through the endogenous pricing mechanism, whereby their trading activities impact prices, providing insights into the behaviors of other agents. However, in the LM model, agents actively adjust their strategies by engaging in direct communication with their counterparts and comparing returns, deviating from the conventional interaction methods of most ABMs. The LM model can replicate unique features of financial markets, such as heavy tails for returns and high volatility. In our model, we also incorporate goal-based agents.
A recent trend in ABMs is to incorporate machine learning techniques, similar to the approach adopted in actual financial markets (Meng and Khushi (2019) provides a comprehensive review on the topic). Natranj and Leidner (2019) and Maeda et al. (2020) are two papers that equip agents with deep-learning capabilities to enhance their modeling and analytical capacities.
Numerous researchers, including El Oubani and Lekhal (2022) and Westerhoff (2008), employ ABMs to investigate specific regulations on financial markets. Initially, they construct a stock market that mimics real-world characteristics, then introduce a specific rule and examine its impact on the market. In this type of research, little attention is paid to the performance of individual market participants, particularly over a prolonged period.
In a series of papers, Evstigneev et al. (2006, 2009, 2016), Palczewski et al. (2016), and Hens and Schenk-Hoppé (2020) develop evolutionary finance models that examine a market involving fundamental traders, who compare stock prices with perceived fundamental “values”, and technical traders, who trade by observing price trends. These papers investigate the performance of fundamental investors and focus on identifying the dominant strategy over an extended duration.
Among the papers referenced, the work of Hens and Schenk-Hoppé (2020) bears the closest resemblance to our own design. The authors construct an artificial market featuring three types of funds—fundamental, trend-chasing, and noise trading—and two assets—risky and riskless. Each agent is assigned a patience parameter, representing their endurance in the fund they initially select. By manipulating the patience parameter, the authors demonstrate that the greater the patience of fundamental investors, the larger their market share and the greater proportion of risky assets they attract to their portfolio.
Similarly, we investigate a market containing a risky and a riskless asset. However, rather than dividing investors into three distinct groups with varying degrees of patience, we permit investors to form a spectrum, encompassing pure fundamentalists to pure trend-chasers, along with a diverse range in between.
An examination of over fifty years of contemporary finance literature reveals that scholars have predominantly concentrated on broader market characteristics, such as market efficiency and the risk–reward relationship. Nevertheless, in recent years there has been a discernible shift towards the methods of portfolio management and stock selection, as well as the attributes a stock must possess to surpass its peers. Noteworthy works in this vein include those by Frazzini et al. (2018), Asness et al. (2019), Kozak et al. (2020), Hou et al. (2022), and Gai (2022). Our study aims to address the same question, but from an agent-based standpoint.

3. Model

This section introduces our proposed model. We assume that investors possess knowledge of the underlying fundamental value of the stock market and believe that price patterns exist; they aim to enhance their returns by forecasting these patterns. To develop our model, we follow the framework proposed by LeBaron (2001) and describe each component of our model design in detail.

3.1. Agents

Agent-based models place significant emphasis on the role of agents, as highlighted by LeBaron (2001). The sophistication of agents is a crucial factor in these models, with various levels of complexity being possible. For example, simple rule-based agents have been utilized by Evstigneev et al. (2009), while other studies have employed utility-maximizing learning agents, such as those of the well-known Santa Fe Institute Artificial Stock Market (SFI-ASM).
The present study sets the agents’ objective as the maximization of wealth at the conclusion of the simulation. Given the underlying assumption of a frictionless market, this objective is akin to maximizing the growth rate in each period. This approach, commonly referred to as the Kelly strategy (Kelly 1956), shares similarities with the maximization of logarithmic utility as described by Hakansson (1971). While the appropriateness of the Kelly strategy as a utility function remains a topic of discussion (Rubinstein 1976; Samuelson and Merton 1974), it is compatible with the research objectives of this study. It should be noted that agents are not granted autonomy in determining their consumption levels, a matter elaborated upon later. With the Kelly strategy, evaluating the success of agents is straightforward: the agent accumulating the highest wealth is declared the winner. Additionally, agents are motivated to avoid bankruptcy at all costs, as bankruptcy yields a utility of negative infinity and eliminates any possibility of accruing future wealth.
We assume that cash dividends are paid during each period, and agents consume these dividends in the same period. If agents were to retain these extra resources, the cumulative demand would increase, leading to a consistent inflation of prices. To ensure that all agents trade under equal conditions, the ratio of consumption to wealth must be the same across all agents. Consequently, in each period, some agents must divest themselves of the risky asset to obtain sufficient cash, while others possess excess cash that can be invested.
At the onset of the simulation, all agents are endowed with an equal amount of money. While agents share certain characteristics, they are assigned distinct parameters to ensure the uniqueness of their strategies. These agents estimate the anticipated return of holding the stock for a single period, utilizing the following formula:
$$R = \frac{D}{P} + \frac{1}{\mathit{cat}}\,\frac{V - P}{P} + \mathit{tfp} \times \mathit{TREND}, \qquad (1)$$
where D is the expected dividend for the next period and P is the current price, so that the first term is the expected dividend yield. V represents the fundamental value of the stock. The remaining parameters in Formula (1) are described in the following paragraph.
The agents in this study search for trends in the price trajectory. Each agent is assigned a fixed time period, denoted dur. At each time step t, an agent examines the market returns in two consecutive periods, specifically [t − 2·dur, t − dur] and [t − dur, t]. If the returns have the same sign in both periods, the agent infers the presence of a trend and sets TREND in Formula (1) to −1 for consecutive negative signs and +1 for consecutive positive signs. If no trend is detected, the agent sets TREND to zero. Given this design and the stochastic nature of price movements, approximately half of the agents will observe a trend in a given stock price, while the other half will not. The agents possess identical estimates of the fundamental value of the risky asset, but they differ in their estimates of the time required for the price to converge to this value. To account for this variability, each agent is assigned a parameter denoted “catalyst” (cat). Agents with a low cat anticipate rapid convergence, with the fundamental component of the expected return (the second term in Formula (1)) becoming exceedingly large in absolute terms. The values of cat and dur are fixed for each agent and are sampled from an exponential distribution with a mean of 100 ticks, approximately two years. Similarly, the “trend following preference” parameter, denoted tfp, represents the degree of trend chasing of each agent; it is fixed and sampled from a uniform distribution ranging from 0 to 1.2.
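The trend detection rule and the per-period expected return of Formula (1) can be sketched in a few lines of Python. This is our own illustrative rendering, not the authors’ code; function and variable names are hypothetical:

```python
def trend_signal(prices, t, dur):
    """Trend detection as described in Section 3.1 (illustrative sketch).

    Compares returns over [t - 2*dur, t - dur] and [t - dur, t]; if both
    have the same sign, TREND is +1 (two up moves) or -1 (two down moves),
    otherwise 0.
    """
    r1 = prices[t - dur] / prices[t - 2 * dur] - 1.0  # earlier interval return
    r2 = prices[t] / prices[t - dur] - 1.0            # later interval return
    if r1 > 0 and r2 > 0:
        return +1
    if r1 < 0 and r2 < 0:
        return -1
    return 0

def expected_return(D, P, V, cat, tfp, trend):
    """Formula (1): dividend yield + fundamental yield + technical premium."""
    return D / P + (V - P) / (cat * P) + tfp * trend
```

For example, two consecutive 10% up moves yield TREND = +1, while a down move followed by an up move yields TREND = 0.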
This study intentionally deviates from the design of a realistic market, a decision that requires clarification. In actual stock markets, trader beliefs frequently initiate a self-fulfilling phenomenon. Any influencing factor, regardless of its relevance to the underlying business’s realities, can prompt movement in the stock price if traders act upon it. In this dynamic, the first movers often secure significant profits. To determine whether a value investor can genuinely achieve superior performance independent of this effect, we develop a market that, unlike the actual market, does not inherently favor value investing on average. We refer to this as a “fundamentally neutral” market. As we highlight in our conclusion, our primary findings become even more pronounced when simulating with more realistic assumptions.
To construct a fundamentally neutral market, the sign of the cat parameter is reversed for fifty percent of the agents. As a result, half of the agents buy stocks they deem overvalued, while the other half behave in the opposite manner. Although this assumption may appear unrealistic, we chose it to neutralize the effect of fundamental investors’ demand, so that any advantage displayed by the fundamental strategies can be attributed primarily to dividends rather than being driven solely by demand.
In the present configuration, the agents possess three distinct degrees of freedom, denoted by the parameters cat, dur, and tfp. In order to generate interactions among as many strategies as possible, these three parameters are randomly assigned for each agent. Consequently, the number of strategies equals the number of agents, and we use the two terms interchangeably in what follows.
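Drawing the three parameters, with the sign flip that makes the market fundamentally neutral, might look as follows. This is a sketch under our own assumptions (the paper does not specify how the fifty percent are chosen or whether dur is rounded to an integer; here we flip alternate agents and round dur for use as a tick offset):

```python
import random

def sample_agent_params(n_agents, mean=100.0, tfp_max=1.2, rng=None):
    """Sample per-agent parameters (hypothetical names, our own sketch).

    cat, dur ~ Exponential(mean=100 ticks); tfp ~ Uniform(0, 1.2).
    To keep the market "fundamentally neutral", the sign of cat is
    reversed for half of the agents.
    """
    rng = rng or random.Random()
    agents = []
    for i in range(n_agents):
        cat = rng.expovariate(1.0 / mean)
        if i % 2 == 1:  # reverse the fundamental reaction for half the agents
            cat = -cat
        dur = max(1, round(rng.expovariate(1.0 / mean)))  # used as a tick offset
        tfp = rng.uniform(0.0, tfp_max)
        agents.append({"cat": cat, "dur": dur, "tfp": tfp})
    return agents
```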
As previously emphasized, the agents’ logarithmic utility implies that they avoid bankruptcy at all costs, because any possibility of negative wealth corresponds to a utility of negative infinity. In discrete-time simulations, the potential for a sudden price movement always exists, introducing a risk of bankruptcy for any amount of leverage or short selling of the risky asset. It is therefore never advantageous for agents to incur even a minimal amount of liability, and we exclude borrowing money and short selling stock from our simulation.

3.2. Assets

The market comprises two assets: a risk-free asset (cash) with a zero rate of return and an infinite supply, and a risky asset (stock) with a fixed, limited supply that pays dividends in each period. The dividends follow a discrete-time stochastic process that is mean-reverting, where the mean-reverting coefficient is relatively small and serves to stabilize the dividend level at a steady state. Because they are unaware of the mean reversion, agents rely on the most recent dividends to estimate the fundamental value. From the agents’ perspective, the dynamics of dividends can thus be expressed mathematically as follows:
$$\frac{D_{t+1}}{D_t} = 1 + \sigma_D Z_t, \qquad (2)$$
where $D_t$ is the dividend at time t, $\sigma_D$ is the standard deviation of dividend growth, and $Z_t$ is a standard normal random variable. The stock is subject to two distinct sources of risk, namely the volatility of its price and the uncertainty of the stochastic dividend process, which jointly drive the movement of returns. At the commencement of each simulation step, referred to as a “tick”, the dividend payout is disclosed, leaving price movements as the sole source of risky returns.
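A dividend path consistent with Formula (2) plus a small restoring force can be simulated as below. The exact functional form of the mean reversion is not spelled out in the paper, so the `pull` term here is our own assumption, as are the function and parameter names:

```python
import random

def simulate_dividends(d0, sigma_d, n_steps, mean_revert=0.01, d_bar=None, rng=None):
    """Mean-reverting dividend process (a sketch; the restoring-force form
    is our assumption, not the paper's specification).

    From the agents' point of view, growth is D_{t+1}/D_t = 1 + sigma_D * Z_t,
    while a small mean-reverting term pulls the level back toward d_bar.
    """
    rng = rng or random.Random()
    d_bar = d0 if d_bar is None else d_bar
    path = [d0]
    for _ in range(n_steps):
        d = path[-1]
        shock = sigma_d * rng.gauss(0.0, 1.0)        # the sigma_D * Z_t term
        pull = mean_revert * (d_bar - d) / d         # small restoring force
        path.append(max(d * (1.0 + shock + pull), 1e-9))  # keep dividends positive
    return path
```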
We assume that the agents consider a two-state probability distribution for returns that matches the expectation and variance of the actual returns. This probabilistic framework enables the agents to apply the well-known Kelly criterion and determine the optimal investment in the risky asset, which can be expressed as follows (see Appendix B for more details):
$$\alpha^* = \frac{\mu}{\sigma^2}, \qquad (3)$$
where $\alpha^*$ is the optimal fraction of wealth allocated to the risky asset, $\mu$ is the expected return of the investment, and $\sigma^2$ is the variance rate of the risky asset (the variance of its price percentage change). If all agents allocate this same fraction to the stock, $\alpha^*$ equals $V/(V + C)$, where V signifies the fundamental value of the stock and C represents the total amount of cash available in the market.
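Formula (3) translates directly into code. In this minimal sketch (function name is ours), the result is clipped to [0, 1] to reflect the model’s exclusion of borrowing and short selling described in Section 3.1:

```python
def kelly_fraction(mu, sigma2):
    """Formula (3): optimal fraction of wealth in the risky asset.

    Kelly allocation for a two-state return distribution with mean mu and
    variance sigma2; clipped to [0, 1] because leverage and short selling
    are excluded in the model.
    """
    if sigma2 <= 0:
        raise ValueError("variance must be positive")
    return min(max(mu / sigma2, 0.0), 1.0)
```

For instance, an expected return of 1% with a variance of 4% implies allocating a quarter of wealth to the stock.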
The fundamental value is defined as the equilibrium price that would clear the market if every agent held the risky asset exclusively for its dividend yield. In such a scenario, the expected return for each agent would be $\mu = D/V$. Moreover, fundamental values exhibit a linear relationship with dividends, at least for minor price fluctuations. Consequently, the percentage changes in dividends and fundamental values follow the same stochastic process, with equal variances, $\sigma^2 = \sigma_D^2$. Altogether, Formula (3) transforms into:
$$\frac{V}{V + C} = \frac{1}{\sigma_D^2}\,\frac{D}{V}. \qquad (4)$$
This quadratic equation has the following positive solution:
$$V = \frac{1}{2\sigma_D^2}\left(D + \sqrt{D^2 + 4\,C\,D\,\sigma_D^2}\right), \qquad (5)$$
which represents a shared reference point that all agents take into account when appraising the stock price.
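Formula (5) is straightforward to evaluate numerically; the sketch below (function name is ours) also lets one verify that the resulting V satisfies the equilibrium condition of Formula (4):

```python
import math

def fundamental_value(D, C, sigma_d2):
    """Formula (5): positive root of sigma_D^2 * V^2 - D*V - C*D = 0,
    obtained from the equilibrium condition V/(V+C) = D/(V * sigma_D^2).
    """
    return (D + math.sqrt(D * D + 4.0 * C * D * sigma_d2)) / (2.0 * sigma_d2)
```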

3.3. Market

In our discrete-time market simulation, each period corresponds to approximately one week, chosen so as to approximate the variance of dividends. We contend that the total demand for the risky asset in each period is a decreasing function of the price, and consequently we can identify the price that clears the market. Our analysis focuses on Formula (1), which represents the return of the risky asset in each period. This return is composed of three components: the dividend yield ($D/P$), the fundamental yield ($\frac{1}{\mathit{cat}}\,\frac{V-P}{P}$), and the technical premium ($\mathit{tfp} \times \mathit{TREND}$). The dividend yield is clearly a decreasing function of price, so our attention turns to the other components. For the second term, the fundamental yield, its sign depends on the state of the price (P) relative to the fundamental value (V) and on the sign of cat, which we deliberately make positive for half of the agents and negative for the other half; the fundamental yield can thus be regarded as the neutral component of return with respect to the price. Finally, for the technical premium, as stated in Section 3.1, the agents rely on the two preceding time intervals, [t − 2·dur, t − dur] and [t − dur, t], which eliminates any potential technical preference for price increases. In summary, in each period the dividend yield is the dominant price-decreasing component of return, and total demand is therefore a decreasing function of price. Calculating the clearing price thus reduces to a simple numerical procedure: solving for the price at which total demand equals the fixed supply.
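Because total demand decreases in price, the clearing price can be found by any bracketing root finder. The paper only mentions a “simple numerical resolution”; bisection is one simple choice consistent with that description, sketched here with hypothetical names:

```python
def clearing_price(demand_fns, supply, p_lo=1e-6, p_hi=1e6, tol=1e-9):
    """Find the price equating total demand with the fixed supply by
    bisection, relying on total demand being decreasing in price.

    demand_fns: list of callables, each mapping a price to shares demanded.
    """
    def excess(p):
        return sum(f(p) for f in demand_fns) - supply

    lo, hi = p_lo, p_hi
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:   # demand exceeds supply: clearing price is higher
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, two agents who each spend fixed cash budgets of 60 and 40 on the stock generate total demand 100/p, so a supply of 10 shares clears at a price of 10.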

3.4. Simulation

To calculate the technical component of returns, our agents rely on historical data, which is not accessible to them at the start of the simulation. To address this issue, we train the market using agents’ decisions and clearing prices for several hundred periods, without including actual trades in the model. During the training period, agents’ wealth and portfolios remain unchanged. Once sufficient historical data is generated, actual trades commence. Throughout each period, agents consume all dividends, and we assume a passive evolution, as described in LeBaron (2011), where agents do not adjust their investment strategy. However, over time, some agents become wealthier, while others become relatively poorer, resulting in an increasing number of successful trades and gradual evolution of the market.
Each agent has a unique set of parameters, resulting in multiple unique strategy simulations running simultaneously. As demonstrated in the subsequent section, ample evidence suggests that only a few hundred agents are required to run the model effectively.
At the beginning of the simulation, each agent is provided with an equal proportion of the stock and an equal amount of additional cash. As previously stated, the initial periods correspond to the “training” mode, in which the supply, demand, and pricing mechanisms operate but no actual trades are conducted. Therefore, at the onset of the “real” mode, each agent possesses an adequate price history to determine its technical return. The agents do not engage in any direct interactions with one another; pricing serves as the sole mechanism of interaction.
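The two-phase structure described above, a warm-up period that only accumulates price history followed by real trading in which wealth evolves, can be illustrated with a deliberately simplified, runnable skeleton. The toy price dynamics and fixed holdings below are our own stand-ins, not the paper’s mechanism:

```python
import random

def run_market(n_agents=5, n_warmup=20, n_real=30, rng=None):
    """Minimal two-phase simulation skeleton (toy dynamics, our own sketch).

    During the warm-up ("training") phase, prices are generated and recorded
    but agents' wealth stays frozen; wealth only evolves in the "real" phase.
    """
    rng = rng or random.Random(0)
    wealth = [100.0] * n_agents
    prices = [100.0]
    for tick in range(n_warmup + n_real):
        # toy clearing price: a random walk stands in for the real mechanism
        prices.append(prices[-1] * (1.0 + 0.01 * rng.gauss(0.0, 1.0)))
        if tick >= n_warmup:  # real phase: wealth responds to returns
            ret = prices[-1] / prices[-2] - 1.0
            for i in range(n_agents):
                frac = (i + 1) / (n_agents + 1)  # fixed toy stock fraction
                wealth[i] *= 1.0 + frac * ret
    return prices, wealth
```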

4. Results

4.1. Classification of Results

In an ABM, four distinct states emerge depending on whether the inputs and the outputs are known or unknown. This demarcation, however, is somewhat idealistic, as the categorization of an input as known or unknown is not always clear-cut. For instance, while a general comprehension of investors’ decision-making frameworks may be available, numerous investors in any given market possess investment strategies that remain incompletely understood, even by the investors themselves. In the present analysis, we seek to differentiate between anticipated results and those revealed upon the simulation’s completion. In this section, we first present the findings, demonstrate their significance, and verify that they do not stem from random occurrences. Subsequently, we examine the insights these results provide concerning the attributes of successful and unsuccessful strategies.

4.2. Identifying Winners by Scores

In simulations such as ours, the availability of data is not a concern, and the primary issues relate to computational power and storage capacity. However, the key challenge arises during the data interpretation phase. Standard statistical tests and p-values, which are commonly used for data interpretation, may not be the most suitable choice in such scenarios, as elaborated in Appendix A. In this section, we present the results alongside the rationale for the tests employed to verify their robustness.
Due to the stochastic nature of simulations, it is essential to conduct multiple simulations and aggregate their outcomes. While summing the wealth of individual agents at the end of each simulation may seem straightforward, the challenge arises when the collective wealth of all agents is significantly elevated due to the occurrence of a randomly high valuation of the risky asset. As a result, the impact of individual simulations may not be uniformly distributed. Normalizing the wealth of all agents does not address this issue, as the primary competition is often concentrated among a few agents, and the normalized wealth of the remaining agents is effectively zero.
To mitigate this issue and obtain more robust results, we employ a scoring scheme similar to those used in sporting events. In each iteration of the simulation, the wealthiest agent is awarded 10 points, the next in rank receives 8 points, the third receives 6 points, and so on. The full scoring vector is 10, 8, 6, 5, 4, 3, 2, 1, 0, 0, …, so agents ranked beyond eighth receive no points. This scoring system enables the simplified and efficient analysis of hundreds of simulation rounds with numerous agents, as presented in Table 1, Table 2, Table 3 and Table 4.
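The tournament-style scoring just described is easy to implement; a minimal sketch (function names are ours):

```python
SCORES = [10, 8, 6, 5, 4, 3, 2, 1]  # points for ranks 1..8; 0 afterwards

def score_round(final_wealth):
    """Assign tournament-style points for one simulation round.

    final_wealth: dict mapping agent id -> terminal wealth.
    Returns a dict mapping agent id -> points for this round.
    """
    ranked = sorted(final_wealth, key=final_wealth.get, reverse=True)
    return {a: (SCORES[i] if i < len(SCORES) else 0) for i, a in enumerate(ranked)}

def aggregate(rounds):
    """Sum per-round scores across many simulation rounds."""
    totals = {}
    for r in rounds:
        for a, pts in score_round(r).items():
            totals[a] = totals.get(a, 0) + pts
    return totals
```

Aggregating scores rather than wealth prevents a single round with a randomly high valuation from dominating the comparison.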
Another aspect that we examine is the role of the random price seed in the simulation outcomes. If winning strategies exist, altering the random price seed should not significantly change the results. To test this hypothesis, we conducted 100 simulations of the market using the same 200 agents and assessed the stability of the winning strategies. As presented in Table 1, several winning strategies retained their positions across different random price seeds. For instance, the top-performing strategy, that of agent 68, garnered approximately 50% of the maximum possible 1000 points, a position that can be substantiated by standard statistical tests (see Appendix A).
To demonstrate the statistical significance of our findings, we repeated the set of 100 simulations 100 times, for a total of 10,000 runs. The (5%, 95%) confidence interval for each value is shown in Table 1. In the course of these simulations, we observed that the outcomes were sensitive to the randomness in the data: even after 2000 ticks, the rankings varied with the random seed, indicating that no single strategy was absolutely dominant. We chose a duration of 2000 ticks for our simulations, which roughly corresponds to a few decades. This period is short enough to be regarded as a life-long investment, yet long enough to allow unambiguous observation of the results.
To assess the robustness of our findings with respect to the number of agents involved in the simulations, we initiated the simulations with 10 agents and added additional agents in each iteration. We then recorded the results and normalized the scores of the agents based on the percentage of simulations in which they participated. The scores are presented in Table 2. Our analysis indicates that there is no significant difference between the normalized and real scores, and the best agents remained the same with only minor changes in their ranking. This finding suggests that changing the number of agents involved in the simulations would not have a dramatic impact on the results.
In prior research, such as that conducted by Evstigneev et al. (2009), simulations have been used to approximate real-world market conditions. These studies have shown that fundamental investors often require a significant amount of time to capture a meaningful market share. In our current study, we have set the dividend and price variance levels to be roughly equivalent to actual weekly levels. To observe the stages of change during the simulations, we captured eight snapshots at various time intervals (ticks 10, 20, 50, 100, 200, 500, 1000, and 1500) before reaching the final tick value of 2000. We report the scores associated with these snapshots in Table 3, including only those time stamps in which at least one agent achieves a top ten ranking.
Our findings suggest that, typically, the winning agents exhibit a clear advantage at an early stage of the simulations, although this advantage may not be apparent in the first few ticks. As the simulations progress, the relative wealth of the top-performing agents increases, with their rankings reflecting their continued edge.
Table 3 reveals a range of behaviors among the simulated strategies. Some of the top-performing agents demonstrate their advantages from the beginning of the simulation, while others gradually increase their scores over time. Conversely, certain strategies that initially appear promising may lose their edge, either gradually or abruptly. These latter strategies appear to be more technical in nature, as they tend to lose their advantages in a market dominated by fundamental factors.
The last two rows of the table provide additional insight into the distribution of scores over time. The row labeled sum_t shows the sum of the scores of the top ten agents at each time interval, from 10 to 2000. At the beginning of the simulation, scores are highly concentrated among the top-performing agents. Over time, their advantages tend to diminish and the scores become more dispersed; toward the end of the simulation, other groups of agents gain relative strength in the market. The final row, labeled sum_T, tracks the final top ten agents and is almost monotonically increasing, as we expect these agents to improve their performance through each trial until the end.
In our next test, we conduct simulations in which we adjust the agents’ parameters concurrently, rather than randomly. Specifically, we multiply the parameters cat and dur by a value between 1/3 and 3 and examine the resulting outcomes. To compare these results with those from our initial simulations, we sum the scores of the agents and present them in Table 4. Our analysis indicates that there is no significant difference between the results in Table 4 and those from the earlier simulations. This finding is essential, if unsurprising: the absolute intensity with which agents incorporate value–price gaps or trend patterns into their calculations does not significantly impact their performance. Rather, it is the intensity with which they react to market conditions relative to their peers that determines their success.
Thus far, our findings have demonstrated that our results are not random and are robust when subjected to certain modifications. In the next section, we will identify winning strategies based on their parameters.

4.3. Parameter Distribution of Winners

Up to this point, we have used fixed agents, meaning that agent #9, for example, was always the same agent. In this section, we will randomly assign parameters to the agents. This approach will serve two purposes. Firstly, it will enable us to verify the robustness of our results. Secondly, it will allow us to obtain the parameter distribution of the winning strategies. As previously described, each agent has three specific parameters: catalyst (cat), duration (dur), and trend following preference (tfp). For our verification test, we conduct 100 simulations, each with different agents. To analyze the results, we examine the cat, dur, and tfp parameters of the winning agents and compare them to the parameter distribution of the entire population. The results of this analysis are presented in the three parts of Table 5.
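The bookkeeping behind this verification test can be sketched as follows. Everything in this snippet is illustrative rather than the paper’s actual engine: the parameter ranges, the uniform sampling, and the placeholder scores are all assumptions standing in for the real simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the experiment: draw random (cat, dur, tfp)
# parameters for 200 agents per run, record the winners' parameters, and
# compare their distribution with that of the whole population.
def random_agents(n=200):
    return {
        "cat": rng.integers(-900, 901, size=n),  # assumed parameter range
        "dur": rng.integers(1, 1001, size=n),
        "tfp": rng.uniform(0.0, 1.2, size=n),
    }

population_cat, winner_cat = [], []
for _ in range(100):                      # 100 simulations, fresh agents each
    agents = random_agents()
    scores = rng.random(200)              # placeholder for simulation scores
    top10 = np.argsort(scores)[-10:]      # indices of the ten best agents
    population_cat.append(agents["cat"])
    winner_cat.append(agents["cat"][top10])

pop = np.concatenate(population_cat)      # 20,000 population draws
win = np.concatenate(winner_cat)          # 1,000 winner draws

# Share of low positive cat (1..5) among winners vs. the population; with
# placeholder scores the two shares coincide, whereas a real edge for this
# range would show up as a winners' share well above the population share.
low_pos = lambda x: ((x >= 1) & (x <= 5)).mean()
print(low_pos(win), low_pos(pop))
```

In the actual study, the winners’ distribution departs sharply from the population distribution (Table 5), which is exactly the signal this kind of comparison is designed to expose.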
Table 5 shows that the most effective way to maximize the likelihood of winning is to choose a low positive cat number. Agents within the cat range of 1 to 5 have more than ten times the chance of reaching the top positions than their share of the population would suggest. For instance, these agents occupy the first position 26 times, despite only 2% of the population having this cat number. The next best strategy corresponds to a low negative cat number. This observation may seem paradoxical, as it suggests that two opposite strategies could yield similarly good results. However, it is worth noting that a low absolute value of cat (either positive or negative) creates an extreme strategy with positions that change dramatically and quickly. This type of trading sometimes creates a trend that other agents adopt, thereby conferring an advantage on the original creators of the trend, even if the strategy has no other edge. Hence, agents with a low negative cat, i.e., powerful counter-value investment strategies, have a significant chance of winning. Nevertheless, we also observe in the table that agents with a low positive cat have far more chances of winning (see the first part of Table 5, rows −5⋯−1 and 1⋯5). Furthermore, the agents with a negative cat generally perform worse than those with a positive cat. Additionally, in the mid-range cat (from 20 to 200), it can be observed that fundamental strategies (positive cats) lead to the best overall performances.
Another observation that can be drawn from Table 5 is the potential correlation between low dur parameters and the likelihood of achieving good results. However, this correlation may be due to excessive trading and potential trend-creating consequences, which could have a high chance of backfiring (as discussed in the next section).
To further investigate the relationships between the agents’ parameters and their chances of winning, we study these parameters pairwise in a two-part table. The first part of Table 6 shows the number of top-ranking agents for each pair of (cat, dur) values in the specified ranges. The number in each cell represents the total number of top-ten agents with the associated cat–dur parameters. The numbers in circles indicate the total number of first-ranked agents. Although having a small cat or dur parameter appears to confer a considerable advantage (as observed in Table 5), we do not observe any clearly superior cat–dur strategy.
The second part of Table 6 examines the effects of the dur and tfp parameters from the perspective of winning strategies. Interestingly, our analysis shows that low dur, i.e., short-period trading, does not need to be coupled with high tfp, i.e., forceful trading on trends, to achieve good results. In fact, high tfp can lead to strategies with excessive risks and damaging outcomes. However, low tfp does not result in successful strategies either.

4.4. Parameter Distribution of Losers

To ensure that the results reported in Table 5 are not simply the outcome of taking extreme risks, we now investigate the characteristics of the worst-performing agents. We begin by examining the parameter intervals of the losing agents. In Table 7, we present the results in five tiers, ranging from the worst 1% to the worst 50%. Our earlier observations indicated that a low cat value, whether positive or negative, creates a risky strategy with a significant chance of poor performance, but a low negative cat value has a higher chance of failure (as observed in Table 5). Here, in the mid-range cat values in Table 7, we observe that a positive cat value has a lower chance of being at the bottom. Additionally, we observe that low dur values increase the probability of poor performance. This latter observation suggests that the good performance of low dur may be solely due to the chance associated with an extremely risky strategy. Another noteworthy observation pertains to the effect of tfp on bad performances. In previous sections, we were unable to identify a winning strategy based on tfp. However, in Table 7, we observe that increases in tfp are uniformly associated with a higher chance of being among the bottom tiers.

5. Conclusions

In this study, we constructed a simulated stock market in which every investor selects an optimal portfolio based on both fundamental and technical considerations. Specifically, investors invest in one risky asset (i.e., a stock) that pays dividends and one riskless asset. The investors in our model agree on the stock’s fundamental value. However, the differences in agents’ investing strategies arise from their predictions of the value–price gaps and price trends. Unlike many models in the literature (e.g., Levy et al. 1994; EHS; SFI-ASM), our agents do not simply review the price–dividend ratio of a stock. Instead, they analytically calculate the stock’s fundamental value and use it as a benchmark for the stock price. Our primary goal is to study the performance of different strategies, rather than the market environment as a whole. To the best of our knowledge, beyond the work of Lo et al. (2018), which studies the relative performance of agents in a similar way, little research has been presented from this perspective.
In this study, we develop an unconventional market simulation in which the intrinsic value of the risky asset is disregarded on average. This experimental design enables us to examine three strategies that yield superior returns: (1) betting on the convergence of the value–price gap, (2) betting on the divergence of the value–price gap, and (3) betting on the continuation of the price trend.
We observe that the performance of the second strategy is an artifact of our simulation design, as this approach would be untenable in the real world. The entire market would act against such an irrational strategy, leaving it with no chance of success. In contrast, although the third strategy exhibits some effectiveness, its return is not as favorable as that of the first strategy, and its associated risk is substantially higher.
It is essential to note that our setup is heavily biased in favor of momentum traders, as it lacks counter-momentum traders to counterbalance their influence. Furthermore, our design disadvantages value investors due to the presence of unrealistic counter-value investors. We propose that value investing, unlike other strategies, does not partake in a zero-sum game. Consequently, its performance is not solely reliant on forecasting the actions of other investors and outperforming them, although this may be beneficial.
Our simulations were conducted under certain assumptions that deviate from the real world, such as the presence of counter-value investors and the absence of counter-momentum traders. We also ran simulations under more realistic assumptions, in which most investors act on the convergence of the value–price gap rather than its divergence, and some investors deploy counter-momentum strategies, effectively wagering on the reversal of price trends. Under these conditions, value investors achieve even better returns with reduced risk, while the performance of momentum traders deteriorates. We have omitted these results from the present discussion because the results already presented sufficiently illustrate our primary findings.
Having run the strategies in an environment biased against value investors and subjected them to several robustness checks, we can confidently conclude that the fundamental strategies, which bet heavily on value–price gaps, exhibit superior performance.

Author Contributions

A.T. developed the theoretical formalism, and performed the simulations. Both A.T. and S.Z. contributed to the final version of the manuscript. S.Z. supervised the research. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The simulation code is available upon request.

Acknowledgments

The authors would like to thank Farshad Haghpanah for his valuable suggestions, which helped us improve our model.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Statistical Significance

In simulation studies, abundant data are typically available; the challenge is to filter the relevant information out of the data set. Since as much data as required can be generated, the binding constraints are the time and computing power needed to achieve a desired level of statistical significance. In this paper, we delve further into this issue and propose an ad hoc scoring scheme, analogous to those used in sporting events, to compare agents and to present the outcomes of 100 simulations for multiple agents in a single table. We then apply conventional statistical methods to evaluate the significance of the results.
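The mechanics of such a sporting-style scheme can be sketched briefly. The paper does not spell out its point table, so the assignment below (a Formula-1-like mapping from rank to points) is purely illustrative; only the structure — points per rank, accumulated over many simulations — reflects the approach described here.

```python
# Hypothetical rank-to-points table; the paper's actual values differ.
POINTS = {1: 25, 2: 18, 3: 15, 4: 12, 5: 10, 6: 8, 7: 6, 8: 4, 9: 2, 10: 1}

def score_simulation(ranked_agent_ids):
    """Map each of the top-ten agents of one simulation to its points."""
    return {agent: POINTS[rank]
            for rank, agent in enumerate(ranked_agent_ids[:10], start=1)}

def total_scores(all_simulations):
    """Accumulate points over many simulations into a single score table."""
    totals = {}
    for ranking in all_simulations:
        for agent, pts in score_simulation(ranking).items():
            totals[agent] = totals.get(agent, 0) + pts
    return totals

# Two toy simulations: agent 'a' wins both, 'b' is second both times.
sims = [["a", "b", "c"], ["a", "b", "d"]]
print(total_scores(sims))  # {'a': 50, 'b': 36, 'c': 15, 'd': 15}
```

Aggregating ranks into points in this way is what allows 100 simulations of 200 agents to be summarized in a single table such as Table 1.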
Consider the best agent among the 200 competing agents, which received a score of 474 and ranked first 26 times out of 100 simulations. By the binomial distribution, the probability of such a record occurring by chance is:
$$P = 200 \times \sum_{k=26}^{100} \binom{100}{k} \left(\frac{1}{200}\right)^{k} \left(\frac{199}{200}\right)^{100-k},$$
Although the expression is relatively straightforward to calculate, an approximation better illustrates our point. We replace the last factor with one and every summand with the maximum term to obtain the following upper bound:
$$p < 200 \times 60 \times \binom{100}{26} \left(\frac{1}{200}\right)^{26},$$
which is lower than $10^{-10}$.
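This calculation is easy to reproduce. The sketch below evaluates both the exact union-bound tail and the cruder bound from the text, treating the factor of 200 as a union bound over the 200 agents:

```python
from math import comb

n, p0, agents = 100, 1 / 200, 200

# Exact union-bound tail: chance that any of the 200 agents ranks first
# at least 26 times in 100 simulations if first place were pure luck.
tail = sum(comb(n, k) * p0**k * (1 - p0) ** (n - k) for k in range(26, n + 1))
P = agents * tail

# The cruder bound from the text: drop (199/200)^(100-k) and bound every
# summand by the largest term, C(100, 26) * (1/200)^26.
bound = agents * 60 * comb(n, 26) * p0**26

print(f"P = {P:.2e}, bound = {bound:.2e}")
```

Both quantities come out many orders of magnitude below $10^{-10}$, confirming that the observed record cannot be attributed to chance.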
We can extend the proposed method to agents that are less prominent. Consider agent 16, which never ranked first in the 100 simulations but fell below rank 38 only 13 times. Utilizing a binomial test, we demonstrate that the likelihood of achieving such a performance by chance is lower than $10^{-10}$. These examples highlight the ease with which statistical significance can be established through appropriate testing, provided that the pertinent questions are framed and the crucial relationships identified; indeed, a trained observer can readily recognize when a question of statistical significance is moot.

Appendix B. Kelly in Investment

We consider an investor who is confronted with a risky asset that exhibits a two-state outcome: a gain of g with probability p and a loss of l with probability q = 1 − p. To maximize the expected growth rate, the investor must determine the optimal fraction α of their wealth to invest in this asset. Thus, the problem can be formulated as follows:
$$\max_{\alpha}\; r(\alpha) = (1+\alpha g)^{p}\,(1-\alpha l)^{q},$$
or, taking logarithms,
$$\max_{\alpha}\; \ln r(\alpha) = p \ln(1+\alpha g) + q \ln(1-\alpha l).$$
For the latter, the first-order condition reads as follows:
$$\left.\frac{d \ln r}{d\alpha}\right|_{\alpha=\alpha^{*}} = \frac{p g}{1+\alpha^{*} g} - \frac{q l}{1-\alpha^{*} l} = 0,$$
leading to
$$\alpha^{*} = \frac{p}{l} - \frac{q}{g}.$$
The expected return and variance of this risky asset are, respectively, given by the following relations:
$$\mu = p g - q l,$$
$$\sigma^{2} = p q\,(g+l)^{2}.$$
For p = q = 0.5 and g and l of the same order of magnitude, one obtains the following:
$$\sigma^{2} = p q\,(g+l)^{2} \approx 0.5 \cdot 0.5 \cdot 4 g l = g l.$$
Thereby,
$$\alpha^{*} = \frac{p}{l} - \frac{q}{g} = \frac{p g - q l}{g l} \approx \frac{\mu}{\sigma^{2}}.$$
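As a numeric sanity check on this derivation, the closed-form fraction can be compared with a brute-force maximization of the log growth rate. The probabilities and payoffs below are illustrative choices satisfying the approximation’s assumptions (p = q = 0.5, g and l of the same order):

```python
import numpy as np

def log_growth(alpha, p, g, l):
    """Expected log growth rate for a fraction alpha placed in the risky bet."""
    q = 1 - p
    return p * np.log(1 + alpha * g) + q * np.log(1 - alpha * l)

p, g, l = 0.5, 0.10, 0.09   # illustrative two-state bet
q = 1 - p

alpha_star = p / l - q / g          # closed-form Kelly fraction from above
mu = p * g - q * l                  # expected return
sigma2 = p * q * (g + l) ** 2       # variance

# Grid search over the admissible range (alpha * l < 1) confirms the
# closed form and the mu/sigma^2 approximation.
grid = np.linspace(0.0, 0.999 / l, 200_001)
alpha_num = grid[np.argmax(log_growth(grid, p, g, l))]

print(alpha_star, alpha_num, mu / sigma2)
```

The grid maximizer matches the closed-form $\alpha^{*}$ to within the grid resolution, and $\mu/\sigma^{2}$ lands within a fraction of a percent of it, as the approximation predicts.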

Notes

1
The oldest source we could find in this regard was Fetter (1904), which was cited in Herbener and Holcombe (1999).
2
In the simulation, a slightly modified formula is utilized to ensure that the dividends remain positive and do not deviate significantly from the initial value:
$$d_{t+1} = \bar{d} + \rho\,(d_{t} - \bar{d}) + \sigma_{d} Z_{t},$$
where $d_{t+1}$ is the logarithm of the dividend at time t + 1, $\bar{d}$ is the long-run average of the logarithm of the dividend, and $\rho$ is the mean-reversion coefficient.
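A minimal sketch of this dividend process follows. The parameter values are hypothetical, chosen only for illustration, since the note does not report the ones used in the simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: long-run dividend level 10, strong mean
# reversion, small weekly-scale noise.
d_bar, rho, sigma_d, T = np.log(10.0), 0.95, 0.02, 2000

log_d = np.empty(T)
log_d[0] = d_bar
for t in range(T - 1):
    # AR(1) mean reversion in the *logarithm* of the dividend keeps the
    # dividend level itself positive and anchored near exp(d_bar).
    log_d[t + 1] = d_bar + rho * (log_d[t] - d_bar) + sigma_d * rng.standard_normal()

dividends = np.exp(log_d)
print(dividends.min(), dividends.mean())
```

Because the recursion operates on the logarithm, exponentiating guarantees positivity, while the mean reversion keeps the level close to its initial value, which is exactly the stated purpose of the modification.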
3
A higher price increases the probability of agents calculating a positive trend, subsequently leading to a positive technical premium. This is why the technical component of the return is computed using the previous periods’ stock prices.
4
In a pseudo-random number generator such as the one we used, a seed is needed to initiate the generator. Starting with different seeds results in different series of numbers.

References

  1. Arthur, W. Brian, John H. Holland, Blake LeBaron, Richard Palmer, and Paul Tayler. 1996. Asset pricing under endogenous expectations in an artificial stock market. In Working Papers 96-12-093, Santa Fe Institute, Reprinted in The Economy as an Evolving Complex System II. Boca Raton: CRC Press, pp. 15–44. [Google Scholar]
  2. Asness, Clifford S., Andrea Frazzini, and Lasse Heje Pedersen. 2019. Quality minus junk. Review of Accounting Studies 24: 34–112. [Google Scholar] [CrossRef]
  3. Ehrentreich, Norman. 2008. Agent-Based Modeling: The Santa Fe Institute Artificial Stock Market Model Revisited. Berlin and Heidelberg: Springer. [Google Scholar]
  4. El Oubani, Ahmed, and Mostafa Lekhal. 2022. An agent-based model of financial market efficiency dynamics. Borsa Istanbul Review 22: 699–710. Available online: https://www.sciencedirect.com/science/article/pii/S2214845021000995 (accessed on 25 May 2023). [CrossRef]
  5. Evstigneev, Igor V., Thorsten Hens, and Klaus Reiner Schenk-Hoppé. 2006. Evolutionary stable stock markets. Economic Theory 27: 449–68. [Google Scholar] [CrossRef]
  6. Evstigneev, Igor V., Thorsten Hens, and Klaus Reiner Schenk-Hoppé. 2009. Evolutionary finance. In Handbook of Financial Markets: Dynamics and Evolution. Amsterdam: Elsevier, pp. 507–66. [Google Scholar]
  7. Evstigneev, Igor V., Thorsten Hens, and Klaus Reiner Schenk-Hoppé. 2016. Evolutionary behavioral finance. In The Handbook of Post-Crisis Financial Modeling. London: Palgrave Macmillan, pp. 214–34. [Google Scholar]
  8. Fama, Eugene F., and Kenneth R. French. 1993. Common risk factors in the returns on stocks and bonds. Journal of Financial Economics 33: 3–56. [Google Scholar] [CrossRef]
  9. Fama, Eugene F., and Kenneth R. French. 2015. A five-factor asset pricing model. Journal of Financial Economics 116: 1–22. [Google Scholar] [CrossRef]
  10. Fetter, Frank A. 1904. The Principles of Economics: With Applications to Practical Problems. New York: Century Company. [Google Scholar]
  11. Frazzini, Andrea, David Kabiller, and Lasse Heje Pedersen. 2018. Buffett’s alpha. Financial Analysts Journal 74: 35–55. [Google Scholar] [CrossRef]
  12. Gai, Junyue. 2022. Buffett’s value investment theory based on improved dcf model. Wireless Communications and Mobile Computing 2022: 4293248. [Google Scholar] [CrossRef]
  13. Graham, Benjamin, and David Le Fevre Dodd. 1934. Security Analysis. New York: McGraw-Hill. [Google Scholar]
  14. Hakansson, Nils H. 1971. Multi-period mean-variance analysis: Toward a general theory of portfolio choice. The Journal of Finance 26: 857–84. [Google Scholar]
  15. Hens, Thorsten, and Klaus Reiner Schenk-Hoppé. 2020. Patience is a virtue: In value investing. International Review of Finance 20: 1019–31. [Google Scholar] [CrossRef]
  16. Herbener, Jeffrey M., and Randall G. Holcombe. 1999. Frank A. Fetter: A Forgotten Giant. 15 Great Austrian Economists. Auburn: The Ludwig von Mises Institute, pp. 123–41. [Google Scholar]
  17. Hou, Kewei, Haitao Mo, Chen Xue, and Lu Zhang. 2022. The economics of security analysis. Management Science. Available online: https://pubsonline.informs.org/doi/10.1287/mnsc.2022.4640 (accessed on 26 May 2023).
  18. Jegadeesh, Narasimhan, and Sheridan Titman. 1993. Returns to buying winners and selling losers: Implications for stock market efficiency. The Journal of Finance 48: 65–91. [Google Scholar] [CrossRef]
  19. Kahneman, Daniel, and Amos Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47: 263–91. [Google Scholar]
  20. Kelly, John L. 1956. A new interpretation of information rate. IRE Transactions on Information Theory 2: 185–89. [Google Scholar] [CrossRef]
  21. Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money. New York: Harcourt Brace and Co. [Google Scholar]
  22. Kim, Gew-rae, and Harry M. Markowitz. 1989. Investment rules, margin, and market volatility. Journal of Portfolio Management 16: 45–52. [Google Scholar] [CrossRef]
  23. Kozak, Serhiy, Stefan Nagel, and Shrihari Santosh. 2020. Shrinking the cross-section. Journal of Financial Economics 135: 271–92. [Google Scholar] [CrossRef]
  24. LeBaron, Blake. 2001. Evolution and time horizons in an agent based stock market. Macroeconomic Dynamics 5: 225–54. [Google Scholar] [CrossRef]
  25. LeBaron, Blake. 2002. Short-memory traders and their impact on group learning in financial markets. Proceedings of the National Academy of Sciences of the United States of America 99: 7201–6. [Google Scholar] [CrossRef] [PubMed]
  26. LeBaron, Blake. 2011. Active and passive learning in agent-based financial markets. Eastern Economic Journal 37: 35–43. [Google Scholar] [CrossRef]
  27. LeBaron, Blake, W. Brian Arthur, and Richard Palmer. 1999. Time series properties of an artificial stock market. Journal of Economic Dynamics and Control 23: 9–10. [Google Scholar] [CrossRef]
  28. Levy, Moshe, Haim Levy, and Sorin Solomon. 1994. A microscopic model of the stock market: Cycles, booms, and crashes. Economics Letters 45: 103–11. [Google Scholar] [CrossRef]
  29. Lo, Andrew W., H. Allen Orr, and Ruixun Zhang. 2018. The growth of relative wealth and the kelly criterion. Journal of Bioeconomics 20: 49–67. [Google Scholar] [CrossRef]
  30. Lux, Thomas. 1998. The socio-economic dynamics of speculative markets: Interacting agents, chaos and the fat tails of return distribution. Journal of Economic Behavior & Organization 33: 143–65. [Google Scholar] [CrossRef]
  31. Lux, Thomas, and Michele Marchesi. 2000. Volatility clustering in financial markets. International Journal of Theoretical and Applied Finance 3: 675–702. [Google Scholar] [CrossRef]
  32. Maeda, Iwao, David DeGraw, Michiharu Kitano, Hiroyasu Matsushima, Hiroki Sakaji, Kiyoshi Izumi, and Atsuo Kato. 2020. Techniques for verifying the accuracy of risk measurement models. Journal of Risk and Financial Management 13: 71. [Google Scholar] [CrossRef]
  33. Meng, Terry Lingze, and Matloob Khushi. 2019. Reinforcement learning in financial markets. Data 4: 110. [Google Scholar] [CrossRef]
  34. Natranj, Raman, and Jochen L. Leidner. 2019. Financial market data simulation using deep intelligence agents. Paper presented at International Conference on Practical Applications of Agents and Multi-Agent Systems, Ávila, Spain, June 26–28; pp. 200–11. [Google Scholar]
  35. Palczewski, Jan, Klaus Reiner Schenk-Hoppé, and Tongya Wang. 2016. Itchy feet vs. cool heads: Flow of funds in an agent-based financial market. Journal of Economic Dynamics and Control 63: 53–68. [Google Scholar] [CrossRef]
  36. Palmer, Richard G., W. Brian Arthur, John H. Holland, Blake LeBaron, and Paul Tayler. 1994. Artificial economic life: A simple model of a stock market. Physica D: Nonlinear Phenomena 75: 264–74. [Google Scholar] [CrossRef]
  37. Ross, Stephen A. 1976. The arbitrage theory of capital asset pricing. Journal of Economic Theory 13: 341–60. [Google Scholar] [CrossRef]
  38. Rubinstein, Mark. 1976. The strong case for the generalized logarithmic utility model as the premier model of financial markets. The Journal of Finance 31: 551–71. [Google Scholar] [CrossRef]
  39. Samuelson, Paul A., and Robert C. Merton. 1974. Generalized mean-variance trade offs for best perturbation corrections to approximate portfolio decisions. The Journal of Finance 29: 27–40. [Google Scholar]
  40. Shleifer, Andrei, and Robert W. Vishny. 1997. The limits of arbitrage. The Journal of Finance 52: 35–55. [Google Scholar] [CrossRef]
  41. Westerhoff, Frank H. 2008. The use of agent-based financial market models to test the effectiveness of regulatory policies. Jahrbücher für Nationalökonomie und Statistik 228: 195–227. [Google Scholar] [CrossRef]
Table 1. Agents’ scores, positions, and the corresponding confidence intervals, in 10,000 simulations.
Number of Gained Positions
Agent | Score | 1st | 2nd | 3rd | Rank 4th⋯6th | Rank 7th⋯9th | Other Ranks
68 | 474 (397, 547) | 26 (18, 33) | 12 (8, 17) | 9 (5, 14) | 13 (8, 19) | 5 (2, 9) | 35
194 | 420 (353, 506) | 29 (22, 38) | 10 (5, 15) | 4 (1, 7) | 7 (4, 11) | 5 (2, 9) | 46
184 | 332 (276, 384) | 2 (0, 4) | 16 (10, 23) | 14 (9, 20) | 23 (16, 30) | 8 (4, 12) | 38
16 | 287 (239, 340) | 0 (0, 2) | 6 (2, 10) | 17 (11, 24) | 29 (20, 37) | 9 (5, 15) | 38
118 | 268 (224, 314) | 5 (2, 8) | 12 (7, 17) | 9 (5, 14) | 14 (8, 19) | 12 (7, 17) | 48
138 | 216 (164, 274) | 6 (3, 10) | 9 (3, 14) | 6 (3, 10) | 10 (5, 14) | 6 (3, 10) | 63
110 | 154 (118, 187) | 0 (0, 0) | 0 (0, 1) | 0 (0, 2) | 32 (23, 39) | 18 (13, 24) | 49
172 | 142 (113, 174) | 0 (0, 0) | 0 (0, 1) | 0 (0, 1) | 29 (22, 38) | 26 (20, 34) | 45
144 | 105 (67, 149) | 2 (0, 4) | 3 (0, 7) | 3 (1, 7) | 9 (4, 14) | 5 (2, 9) | 78
1 | 83 (37, 140) | 4 (1, 8) | 3 (1, 6) | 2 (0, 4) | 3 (0, 6) | 1 (0, 3) | 88
Note: Ten best agents according to their average scores. In each row, the scores and top positions gained are presented (in bold font) along with the confidence interval (in parentheses) for 100 simulations. Some agents have significant advantages over others. This observation rejects the randomness of agents’ returns.
Table 2. Agents’ original and normalized scores in 100 simulations.
Agent | Original Score | Reduced Score | Normalized Score
68 | 453 | 467 | 667
194 | 402 | 38 | 475
184 | 317 | 33 | 275
16 | 284 | 545 | 568
118 | 245 | 234 | 509
138 | 177 | 46 | 128
110 | 155 | 144 | 288
172 | 147 | 29 | 161
1 | 12 | 23 | 23
144 | 8 | 3 | 39
Note: The scores of the top 10 agents in Table 1 (original score) are compared with the scores they received in the new simulations with a fewer number of agents (reduced score). To achieve a fair comparison, we increased the agents’ scores based on the percentage of simulations in which they participated (normalized score).
Table 3. Agents’ scores over time.
Time
Agent | 10 | 20 | 50 | 100 | 200 | 500 | 1000 | 1500 | 2000
108154216252243153136112
8009210066139125
16418449201885790156213284
22356357148372031342039
32512489200573416472528
35912038374951413932
6878278137014786151277349453
69626588696100887850
740024022414974592032
890710714316116010710167
9707728897100726349
110293279113433057100125155
11211978271116121518
1182979735593157198274245
11909138147162141838370
1250087117137133974938
1290431155462332076
138027229483312322328262177
144000152820575883
1722351957820173164104147
173026130222210300
179732222313838352921
1810042809583666455
18465665829412871109175255317
19402872169206286354400402
sum_t | 3535 | 3408 | 2104 | 1874 | 1667 | 1802 | 1945 | 2219 | 2375
sum_T | 2413 | 2504 | 1584 | 1364 | 1152 | 1466 | 1862 | 2176 | 2375
Note: Agents’ scores at different time stamps before the final time. The sums of the scores for the top-ten agents are presented in the snapshots shown at the top of each column. The last row represents the sums of the scores of the final top-ten agents at each snapshot. From this table, we infer that changing the tick number beyond 2000 would not affect our results significantly.
Table 4. Changing agents’ parameters concurrently.
Agent | Original Score | Scaled Score
68 | 453 | 469
194 | 402 | 75
184 | 317 | 369
16 | 284 | 280
118 | 245 | 73
138 | 177 | 48
110 | 155 | 217
172 | 147 | 169
1 | 12 | 55
144 | 8 | 30
Note: In 100 simulations, we scale the agents’ cat and dur parameters from approximately one-third to up to three times the original ones and compare the total scores.
Table 5. Parameter distribution of winners.
cat
Number of Gained Positions
cat Range | 1st | 2nd | 3rd | Rank 4th⋯6th | Rank 7th⋯9th | All (PDF)
−900⋯−500 | 0 | 0 | 1 | 0 | 0 | <1%
−500⋯−300 | 0 | 0 | 0 | 0 | 0 | 2%
−300⋯−200 | 3 | 0 | 0 | 0 | 0 | 4%
−200⋯−100 | 5 | 0 | 1 | 0 | 2 | 12%
−100⋯−50 | 2 | 5 | 4 | 4 | 6 | 12%
−50⋯−20 | 3 | 3 | 2 | 13 | 12 | 11%
−20⋯−10 | 2 | 7 | 3 | 11 | 24 | 4%
−10⋯−5 | 4 | 3 | 5 | 26 | 25 | 2%
−5⋯−1 | 9 | 8 | 12 | 21 | 10 | 2%
1⋯5 | 26 | 34 | 32 | 70 | 23 | 2%
5⋯10 | 4 | 17 | 16 | 72 | 62 | 2%
10⋯20 | 3 | 3 | 4 | 29 | 79 | 4%
20⋯50 | 13 | 6 | 5 | 11 | 18 | 11%
50⋯100 | 6 | 3 | 3 | 6 | 11 | 12%
100⋯200 | 12 | 1 | 5 | 11 | 9 | 12%
200⋯300 | 1 | 3 | 1 | 14 | 9 | 4%
300⋯500 | 6 | 6 | 4 | 9 | 10 | 2%
500⋯900 | 1 | 1 | 2 | 3 | 0 | <1%
dur
Number of Gained Positions
dur Range | 1st | 2nd | 3rd | Rank 4th⋯6th | Rank 7th⋯9th | All (PDF)
1⋯5 | 41 | 8 | 11 | 18 | 16 | 5%
5⋯10 | 5 | 5 | 2 | 13 | 11 | 5%
10⋯20 | 7 | 9 | 9 | 18 | 25 | 9%
20⋯50 | 17 | 24 | 23 | 65 | 58 | 21%
50⋯100 | 13 | 16 | 26 | 69 | 71 | 24%
100⋯200 | 12 | 28 | 15 | 76 | 78 | 23%
200⋯300 | 5 | 5 | 9 | 25 | 25 | 9%
300⋯500 | 0 | 4 | 5 | 15 | 15 | 4%
500⋯700 | 0 | 1 | 0 | 1 | 1 | 1%
700⋯1000 | 0 | 0 | 0 | 0 | 0 | <1%
tfp
Number of Gained Positions
tfp Range | 1st | 2nd | 3rd | Rank 4th⋯6th | Rank 7th⋯9th | All (PDF)
0.0⋯0.1 | 9 | 5 | 6 | 36 | 47 | 10%
0.1⋯0.2 | 10 | 9 | 19 | 35 | 37 | 10%
0.2⋯0.4 | 8 | 16 | 9 | 32 | 33 | 10%
0.4⋯0.5 | 13 | 12 | 10 | 36 | 31 | 10%
0.5⋯0.6 | 8 | 7 | 13 | 33 | 27 | 10%
0.6⋯0.7 | 6 | 15 | 10 | 23 | 24 | 10%
0.7⋯0.8 | 8 | 8 | 15 | 31 | 20 | 10%
0.8⋯1.0 | 12 | 14 | 7 | 22 | 33 | 10%
1.0⋯1.1 | 16 | 7 | 4 | 28 | 19 | 10%
1.1⋯1.2 | 9 | 7 | 7 | 23 | 28 | 10%
Note: The number of top-ranking agents for different ranges of cat, dur, and tfp. We present the distribution of the specified parameter for winning agents in each of the three parts of the table, comparing it to the probability distribution of the same parameter in the population (last column). Each row represents the range of the parameter being studied, and each column shows the observed ranks. Note that the fourth and fifth columns each aggregate agents across three ranks, so these columns sum to roughly 300 rather than 100.
Table 6. The cat–dur and dur–tfp distributions of winners.
cat vs. dur
Number of Gained Positions for Each dur Range
cat Range | dur: 5 | 10 | 20 | 50 | 100 | 200 | 300 | 500 | 700 | 1000 (upper bounds of the dur ranges)
900 500 1000000000
500 300 0000000000
300 200 1 001 1 00000
200 100 4 01 2 1 00000
100 50 311447 2000
50 20 3 2 2999 3100
20 10 03 213 13167200
10 5 06 418 19 164 200
5 1 4 25 19 1216 5 010
1 5 4 619 49 42 48 14 1300
5 10 5 21337 5153 151000
10 20 5 781545 41161310
20 50 24 5 9 9552200
50 100 15 158310000
100 200 16 3 4 10 420100
200 300 5 0125122100
300 500 7 106856 210
500 900 1 011050010
dur vs. tfp
Number of Gained Positions for each tfp
dur Range | tfp: 0.12 | 0.24 | 0.36 | 0.48 | 0.60 | 0.72 | 0.84 | 0.96 | 1.08 | 1.20 (upper bounds of the tfp ranges)
1 5 5 9 10 16 8 11 9 11 11 7
5 10 107 25 4523 01
10 20 10136 8 66 6 9 3 7
20 50 20 26 17 32 25 1023 15 18 17
50 100 33 28 171922 15 162823 18
100 200 2822 39 28 1819 2324 16 19
200 300 7 10 97712 8556
300 500 4372666535
500 700 0010000030
700 1000 0000000000
Note: The simultaneous effects of two parameters on winning strategies. The number in each cell represents the total number of top-ten agents with the associated parameters. The numbers in circles indicate the total number of first-ranked agents. Our analysis suggests that each parameter class has its effect individually, as observed, for instance, for low dur values coupled with low tfp values.
Table 7. The parameter distribution of losers.
cat
Number of Gained Positions
cat Range | Worst 1% | Worst 5% | Worst 10% | Worst 25% | Worst 50% | All (PDF)
−900⋯−500 | 0 | 0 | 0 | 0 | 0 | <1%
−500⋯−300 | 8 | 10 | 9 | 6 | 4 | 2%
−300⋯−200 | 9 | 10 | 9 | 8 | 6 | 4%
−200⋯−100 | 13 | 14 | 16 | 15 | 16 | 12%
−100⋯−50 | 5 | 7 | 8 | 10 | 15 | 12%
−50⋯−20 | 2 | 5 | 5 | 11 | 14 | 11%
−20⋯−10 | 1 | 2 | 3 | 7 | 6 | 4%
−10⋯−5 | 1 | 2 | 4 | 4 | 3 | 2%
−5⋯−1 | 2 | 3 | 5 | 4 | 3 | 2%
1⋯5 | 2 | 2 | 2 | 2 | 1 | 2%
5⋯10 | 0 | 2 | 3 | 2 | 1 | 2%
10⋯20 | 0 | 3 | 5 | 4 | 3 | 4%
20⋯50 | 10 | 6 | 7 | 10 | 7 | 11%
50⋯100 | 12 | 11 | 9 | 8 | 8 | 12%
100⋯200 | 22 | 15 | 10 | 6 | 9 | 12%
200⋯300 | 8 | 4 | 3 | 2 | 3 | 4%
300⋯500 | 4 | 3 | 2 | 1 | 1 | 2%
500⋯900 | 0 | 0 | 0 | 0 | 0 | <1%
dur
Number of Gained Positions
dur Range | Worst 1% | Worst 5% | Worst 10% | Worst 25% | Worst 50% | All (PDF)
1⋯5 | 10 | 7 | 5 | 4 | 3 | 5%
5⋯10 | 33 | 23 | 15 | 9 | 6 | 5%
10⋯20 | 31 | 26 | 22 | 15 | 11 | 9%
20⋯50 | 18 | 25 | 29 | 28 | 25 | 21%
50⋯100 | 4 | 10 | 15 | 21 | 23 | 24%
100⋯200 | 2 | 6 | 10 | 15 | 21 | 23%
200⋯300 | 0 | 2 | 4 | 5 | 7 | 9%
300⋯500 | 0 | 0 | 1 | 2 | 4 | 4%
500⋯700 | 0 | 0 | 0 | 0 | 0 | 1%
700⋯1000 | 0 | 0 | 0 | 0 | 0 | <1%
tfp
Number of Gained Positions
tfp Range | Worst 1% | Worst 5% | Worst 10% | Worst 25% | Worst 50% | All (PDF)
0.0⋯0.1 | 1 | 2 | 3 | 5 | 8 | 10%
0.1⋯0.2 | 2 | 5 | 5 | 7 | 9 | 10%
0.2⋯0.4 | 8 | 5 | 6 | 8 | 9 | 10%
0.4⋯0.5 | 8 | 7 | 8 | 9 | 10 | 10%
0.5⋯0.6 | 11 | 11 | 10 | 11 | 10 | 10%
0.6⋯0.7 | 11 | 11 | 11 | 11 | 10 | 10%
0.7⋯0.8 | 13 | 12 | 13 | 11 | 11 | 10%
0.8⋯1.0 | 15 | 13 | 13 | 11 | 11 | 10%
1.0⋯1.1 | 14 | 17 | 15 | 13 | 11 | 10%
1.1⋯1.2 | 18 | 17 | 16 | 13 | 11 | 10%
Note: The worst performing agents for different ranges of cat, dur, and tfp. The structure of this table is similar to Table 5 but instead of winning agents, each column represents a tier of losing agents, from the worst 1% to the worst 50% (bottom half).
Taherizadeh, A.; Zamani, S. Winner Strategies in a Simulated Stock Market. Int. J. Financial Stud. 2023, 11, 73. https://doi.org/10.3390/ijfs11020073
