May

25

Closing Time

May 25, 2021

Sushil Kedia writes:

Closing time of key contracts around the world has the same character feeding the vig, irrespective of whether this character speaks Japanese, Korean, Chinese, Malaysian, Hindi, Pashto, Hebrew, German, English or American English.

The compulsion not to carry a losing trade overnight, to square off excess positions that cannot be funded overnight, and so on provides a good enough number of hands who are willing to be forced out, and required to be forced out, at the close.

If I can spot, from my back benches in global finance, a ready-made bunch of pigs to be slaughtered every day, I wonder why the 200-billion-dollar liquidity pumps, whether run by a rocket scientist or by anyone else, would not already be squeezing them hour by hour as the sun moves from East to West.

I am wondering what a good way would be to structure a study that tries to isolate statistical evidence of reversing extremes N minutes before the close of related exchanges.

Say the closing times of the top 5 liquidity-producing crude futures exchanges worldwide are noted down: would a statistical study of the N minutes before and after the closing time of each of these 5 exchanges throw up a pattern?

And then, for the equity index futures that produce the top 10 volumes, even though each equity index contract is a distinct entity, is there a closing-time ebb and flow that is being created by the scientists' algorithms?
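A minimal sketch, in Python, of one way to set up the first study: take one-minute returns of a single crude contract and test whether the windows just before and just after each venue's close have non-zero mean. The file name, venue labels, closing times and N below are placeholders, not data; actual closing times vary by venue, contract and daylight saving.

import pandas as pd
from scipy import stats

N = 30  # minutes before/after each close to examine

# one-minute log returns of a crude future, indexed by UTC timestamp (hypothetical file)
rets = pd.read_csv("cl_minute_returns.csv", index_col=0, parse_dates=True)["ret"]

# illustrative venue labels and UTC closing times; replace with the actual top-5 venues
closes_utc = {"VenueA": "18:30", "VenueB": "22:00", "VenueC": "07:00",
              "VenueD": "19:55", "VenueE": "17:55"}

rows = []
for venue, hhmm in closes_utc.items():
    time_of_day = rets.index - rets.index.normalize()            # time of day as a Timedelta
    mins_to_close = (time_of_day - pd.Timedelta(hhmm + ":00")).total_seconds() / 60
    before = rets[(mins_to_close >= -N) & (mins_to_close < 0)]   # [-N, 0) window
    after = rets[(mins_to_close >= 0) & (mins_to_close <= N)]    # [0, +N] window
    for label, sample in (("before", before), ("after", after)):
        t_stat, p_val = stats.ttest_1samp(sample, 0.0)
        rows.append({"venue": venue, "window": label, "mean": sample.mean(),
                     "t": t_stat, "p": p_val})

print(pd.DataFrame(rows))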

Victor Niederhoffer writes:

This is a very interesting and suggestive post. Let's have some feedback on how to approach this query.

Jared Albert writes:

I think closing time/price as the sole predictor is too broad and noise will swamp any effect. 

So, to me, the first step is to classify the various conditions that exist before the close: for example, days up vs. down, up/down on the day, distance from the x-day max/min, etc.

There are so many predictor variables that I don't think this is a frequentist kind of problem lending itself to, for example, logistic regression and lots of crosstabs.

So step one is a machine learning classification model to separate the states using the closing time movement as the target for training. 

If it turns out that there are classifiable 'set-ups', then one could run the analysis within the most promising classifications.
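A minimal sketch of that classification step, assuming scikit-learn and a hypothetical per-session feature file. The file name, feature columns and the 'close_window_up' target (sign of the last-N-minute return) are all illustrative, and gradient boosting is just one possible classifier, not a prescription.

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# hypothetical file: one row per session with features known before the close
df = pd.read_csv("preclose_features.csv")

features = ["ret_open_to_cutoff", "ret_vs_prev_close", "dist_from_20d_max", "dist_from_20d_min"]
X = df[features]
y = df["close_window_up"]        # 1 if the last-N-minute return was positive, else 0

# keep chronological order: train on the earlier part, test on the later part
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))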

Apr

10

Zubin Al Genubi writes:

The all-time usual rate is about 4%, as I recall.

Jared Albert writes:

https://www.bankofengland.co.uk/-/media/boe/files/working-paper/2020/eight-centuries-of-global-real-interest-rates-r-g-and-the-suprasecular-decline-1311-2018

The graphs are a ton of fun to flip through.


Feb

21

what is a Recent

February 21, 2021

Big AI writes:

from 2005 (may have been posted already):

Does Trend Following Work on Stocks?

Cole Wilcox, Managing Partner, Director of Research & Trading, Blackstar Funds, LLC

Eric Crittenden, Blackstar Funds, LLC

https://www.cis.upenn.edu/~mkearns/finread/trend.pdf

Over the years many commodity trading advisors, proprietary traders, and global macro hedge funds have successfully applied various trend following methods to profitably trade in global futures markets. Very little research, however, has been published regarding trend following strategies applied to stocks. Is it reasonable to assume that trend following works on futures but not stocks? We decided to put a long only trend following strategy to the test by running it against a comprehensive database of U.S. stocks that have been adjusted for corporate actions. Delisted companies were included to account for survivorship bias. Realistic transaction cost estimates (slippage & commission) were applied. Liquidity filters were used to limit hypothetical trading to only stocks that would have been liquid enough to trade, at the time of the trade. Coverage included 24,000+ securities spanning 22 years. The empirical results strongly suggest that trend following on stocks does offer a positive mathematical expectancy, an essential building block of an effective investing or trading system.

Jared Albert  writes:

This is obviously the hardest (or most expensive) part of a study like this:

<<<Data Integrity

Data Coverage: The database used included 24,000+ individual securities from the NYSE, AMEX & NASDAQ exchanges. Coverage spanned from January 1983 to December 2004.

Survivorship bias: The database used for this project included historical data for all stocks that were delisted at some point between 1983 and 2004. Slightly more than half of the database is comprised of delisted stocks.

Corporate actions: All stock prices were proportionately back adjusted for corporate actions, including cash dividends, splits, mergers, spin-offs, stock dividends, reverse splits, etc.

Realistic investable universe: A minimum stock price filter was used to avoid penny stocks. A minimum daily liquidity filter was used to avoid stocks that would not have been liquid enough to generate realistic historical results from. Both filters were evaluated for every stock and for every day of history in the database, mimicking how results would have appeared in real time.>>>
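For reference, a point-in-time version of those two filters could look like the sketch below. This is my own illustration, not the paper's code; the file name, column names and thresholds are assumptions.

import pandas as pd

# hypothetical long-format file, including delisted names: date, symbol, close, volume
px = pd.read_csv("us_stocks_with_delisted.csv", parse_dates=["date"])

MIN_PRICE = 5.0            # minimum price filter to avoid penny stocks
MIN_DOLLAR_VOL = 500_000   # minimum average daily dollar volume

px = px.sort_values(["symbol", "date"])
px["dollar_vol"] = px["close"] * px["volume"]
# trailing 21-day average dollar volume: uses data up to and including each day, never the future
px["avg_dollar_vol"] = (px.groupby("symbol")["dollar_vol"]
                          .transform(lambda s: s.rolling(21, min_periods=21).mean()))

# evaluated for every stock and every day, mimicking a real-time decision
px["tradable"] = (px["close"] >= MIN_PRICE) & (px["avg_dollar_vol"] >= MIN_DOLLAR_VOL)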

The data vendor they used has these prices listed:

PowerST will run on any Windows computer.

The cost of PowerST is:

Initial Purchase: $25,000

Monthly Maintenance: $1,000

Calculation Engine Source Code: $100,000

Does anyone know of an economical source where at least merger and delisting data is accounted for? :)

This site has delisted symbols so long as they are not reused, e.g.: http://www.eoddata.com/StockQuote/NYSE/LEH.htm

Jul

11

Alex Castaldo writes: 

Here's the skinny, from Math Puzzles Volume 1 by Presh Talwalkar (doc here), from a nature walk, originally to stretch Aubrey's mind: the odds of a comeback victory.

Consider 2 teams A and B that are completely evenly matched. Given that a team is behind in score at half time, what is the probability that the team will overcome the deficit and win the game? Assume the first half and the second half are independent events. Presh solves it as follows, logically:

Since the two teams are evenly matched, it is equally likely that the team will score enough points to overcome the deficit or that it will not. For example, the event of falling behind 6 points in a half happens with the same probability as gaining 6 points in a half. He concludes the probability is 0.25.
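One way to make the symmetry argument explicit (a sketch, assuming the two half-margins are independent, identically distributed, symmetric about zero, and continuous so ties can be ignored): write $M_1$ for the leading team's first-half margin and $M_2$ for that same team's second-half margin. The trailing team is behind when $M_1 > 0$ and comes back to win when $M_2 < -M_1$, so with $F$ and $f$ the common CDF and density,

\[
P(\text{comeback} \mid M_1 > 0)
  = \frac{P(M_2 < -M_1,\ M_1 > 0)}{P(M_1 > 0)}
  = \frac{\int_0^{\infty} F(-m)\, f(m)\, dm}{1/2}
  = \frac{\int_{1/2}^{1} (1 - u)\, du}{1/2}
  = \frac{1/8}{1/2}
  = \frac{1}{4},
\]

using $F(-m) = 1 - F(m)$ by symmetry and the substitution $u = F(m)$.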

Now we posted the empirical results from basketball games, and many others have given the empirical results for football games, and I gave some results for the markets. This seems to be of interest to everyone, had the most views of any post, and it was good for 7 or 8 points today. Let's have your discussion and solution of this problem. Presh says the answer is 0.25 both empirically (NFL in 1995) and logically.

Jared Albert writes: 

In a game with two teams, where team 1's first-round advantage varies from zero up to all 20 points available in the second round, the probabilities of team 0 coming from behind to win are given by the array below:

[0.49, 0.306, 0.22, 0.129, 0.09, 0.03, 0.018, 0.011, 0.004, 0.002, 0.002, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

For example, if the teams are even going into the second round with 20 available points, team 0 wins with probability .490; with a one-point advantage to team 1 at the start of round 2, team 0 wins .306 of the time; with 2 points to team 1, team 0 wins .220 of the time, etc.

Here's the Monte Carlo:

import numpy as np

np.random.seed(10)

count = 1000        # simulations per starting advantage
win, lose = 1, 0    # each scoring opportunity is worth 1 point or 0
team0_start = 0     # team0 begins the second round with no head start
size = 20           # points available in the second round

def runs():
    # points scored by one team in a round: 20 independent 50/50 opportunities
    return np.sum(np.random.choice([win, lose], size=size, replace=True))

def outcome(team1_start, count=count, team0_start=team0_start):
    # number of simulations in which team0 overcomes team1's head start
    wins = 0
    for _ in range(count):
        team0_end = runs() + team0_start
        team1_end = runs() + team1_start
        if team0_end > team1_end:
            wins += 1
    return wins

out_list = []
for i in range(size):
    out_list.append(outcome(team1_start=i) / count)

print(f'outlist: {out_list}')

Victor Niederhoffer writes: 

Up your alley, I think. We have done something similar for the market with real empirical results. The unconditional probability is much less than 20%.

Stephen Stigler writes: 

I am sure you know but I repeat anyway:

1) the simple calculations ignore correlation between teams.

2) they also ignore information on the distribution of changes

3) Calculations using the distribution of changes are not hard.

4) But the information about the probability of extreme events is not well determined, so the calculations can be inaccurate.

5) In any case, markets, unlike sports, are not zero-sum games.

Jan

11

Motivated by the Saudi Aramco IPO, this study tries to answer the question: does the broad market rise into big IPOs (and then sell off after)? This is based on the theory that there is an effort to boost the market before the IPO to benefit the new-issue ecosystem.

I'm surprised, but there doesn't seem to be an effect, as the chart shows the usual upward drift. I took the 25 biggest US IPOs by proceeds and graphed the mean of the LN changes of SPY over the 20 trading days before the IPO date and the 20 trading days after.

LN changes, 20 days before to IPO date:

count    25.000000
mean     -0.009702
std       0.051080
min      -0.125424
25%      -0.029615
50%      -0.013882
75%       0.020232
max       0.075516

LN changes, IPO date to 20 days after:

count    25.000000
mean      0.004976
std       0.032097
min      -0.069427
25%      -0.008863
50%       0.001278
75%       0.024553
max       0.065406

IPO data is from here

SPY data is from here

script is here.
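For anyone who wants to reproduce the calculation, a hedged re-sketch of it in Python (not the author's actual script; the file names and column names are assumptions):

import numpy as np
import pandas as pd

spy = pd.read_csv("spy_daily.csv", index_col=0, parse_dates=True)["close"]          # SPY daily closes
ipo_dates = pd.read_csv("big_ipo_dates.csv", parse_dates=["ipo_date"])["ipo_date"]  # 25 biggest US IPOs

before, after = [], []
for d in ipo_dates:
    pos = spy.index.searchsorted(d)                 # first trading day on or after the IPO date
    if pos - 20 < 0 or pos + 20 >= len(spy):
        continue
    before.append(np.log(spy.iloc[pos] / spy.iloc[pos - 20]))   # 20 days before -> IPO date
    after.append(np.log(spy.iloc[pos + 20] / spy.iloc[pos]))    # IPO date -> 20 days after

print(pd.Series(before).describe())
print(pd.Series(after).describe())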

Aug

13

A case study in multiple comparisons and a warning against using CART for market prediction:

"Exercising for 90 Minutes Or More Could Make Mental Health Worse, Study Suggests"
by Sarah Knapton, Science Editor

Steve Ellison writes: 

A statement by Mark Hulbert in Sunday's Wall Street Journal raised my suspicions. He said that the percentage of household financial assets invested in stocks had an R-squared of 61% since 1954 in forecasting the net change of the S&P 500 over the next 10 years.

There have only been 6 non-overlapping 10-year periods since 1954. I have not gotten around to getting the data for household financial assets, but how could any factor possibly have an R-squared of 61% with any significance after 6 observations?

I will grant that the indicator makes some intuitive sense from the perspectives of "copper[ing] the public play" and waiting to buy until the old men are hobbling on canes, but I question the statistics.

Link and relevant excerpt below:

The most accurate of the indicators I studied was created by the anonymous author of the blog Philosophical Economics. It is now as bearish as it was right before the 2008 financial crisis, projecting an inflation-adjusted S&P 500 total return of just 0.8 percentage point above inflation. Ten-year Treasurys can promise you that return with far less risk.
Bubble flashbacks
The only other time it was more bearish (during the period since 1951 for which data are available) was at the top of the internet-stock bubble.
The blog’s indicator is based on the percentage of household financial assets—stocks, bonds and cash—that is allocated to stocks. This proportion tends to be highest at market tops and lowest at market bottoms.
According to data collected by Ned Davis Research from the Federal Reserve, this percentage currently looks to be at 56.3%, more than 10 percentage points higher than its historical average of 45.3%. At the top of the bull market in 2007, it stood at 56.8%.
Ned Davis, the eponymous founder of Ned Davis Research, calls the indicator’s record “remarkable.” I can confirm that its record is superior to seven other well-known valuation indicators analyzed by my firm, Hulbert Ratings.
To figure out how accurate an indicator has been, we calculated a statistic known as the R-squared, which ranges from 0% to 100% and measures the degree to which one data series explains or predicts another.
In this case, zero means that the indicator has no meaningful ability to predict the stock market’s returns after inflation over the next 10 years. On the other hand, a reading of 100% would mean that the indicator is a perfect predictor.
Since 1954, according to our analysis, the Philosophical Economics indicator had an R-squared of 61%. In the messy world of stock-market prognostication, that is statistically significant. Our analysis begins in that year because that is the earliest date for which data are available for all of the other indicators that we studied.

Jared Albert writes: 

As I understand the statement, the R**2 is generated from the correlation between the end of one ten year period and the end of the other.

Is this a fair model?
1) Use the annual returns for the S&P 500 for the period 1954-2014, broken into 6 decade buckets.
2) Use the standard deviation of returns for each of those 10-year periods (STD calculated on only 10 yearly values for simplicity).
3) Generate a random return value from a normal distribution for the end year of each period.
4) Repeat the above for cash and bonds.
5) Create the portfolio ratio of stocks:bonds:cash.
6) Calculate the R**2 value between every 10-year period for stocks.
7) Do this 1000 times and calculate the summary stats for the R**2.

Is this the way to build the model? I may do this later if I can quickly find the cash and bond returns. Thank you.
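As a first pass on the significance question, here is a simplified null-hypothesis sketch (not exactly the seven steps above): simulate a predictor that by construction has no relation to the subsequent 10-year returns, compute the R**2 on 6 non-overlapping periods, and see how large an R**2 shows up by chance. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)

n_periods = 6        # non-overlapping 10-year windows since 1954
n_trials = 10_000    # simulated independent "histories"

r2 = np.empty(n_trials)
for i in range(n_trials):
    predictor = rng.normal(size=n_periods)    # e.g. equity allocation at each period start
    future_ret = rng.normal(size=n_periods)   # 10-year returns, unrelated to the predictor by construction
    r2[i] = np.corrcoef(predictor, future_ret)[0, 1] ** 2

print(f"median R**2 under the null: {np.median(r2):.2f}")
print(f"share of trials with R**2 >= 0.61: {np.mean(r2 >= 0.61):.1%}")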

May

16

 "I understand your here to collect your share?" said the Keeper.

"My share of the taxes, yes," said the Visitor, "Piketty sent me."

"Are you sure you want to tax capital? I mean, really sure?" said the Keeper.

"It's only fair," said the Visitor.

"Well, to register and receive you must put on this headset," said the Keeper, handing over a kind of halo object, "it will read your Identity Number, calculate your distribution and begin making a fair deposit."

"Perfect!" said the Visitor, and popped the contraption onto his head. The Keeper stared at him directly, a thin smile on his lips.

The Visitor pressed the power button on the halo. "Aaaah! No, please. What." The Visitor spasmed wildly. "Aaargh! Oh my God! Please, please." The Visitor's flight reflex kicked in, his muscles began to shake violently, bringing him to his knees. The tension in his bladder collapsed and piss soaked his pants. The Visitor writhed on the floor, "MAKE IT STOP! What is this?!"

The Keeper quickly pulled a handset from his pocket and clicked the interrupt. Nobody so far had completed the deposit in full. The Visitor fell to the floor, exhausted. With his eyes bloodshot and watering, the Visitor cried out, "How dare you, what was that torture? You fiend! This is criminal."

"You asked for your share," said the Keeper, "and your bank account is in credit now. Your share of the capital taxes have been delivered, proportionately."

"Are you some kind of SICKO?" screamed the Visitor.

"No. You see, you asked for your fair share. We decided in transferring capital taxes, we should also make an additional deposit to keep it balanced. We gave you a concentrated dose of every sleepless night, strained relationship, cheating business partner, every lie heard, every deal that didn't close, every set-back, every busted asset, every temptation skirted, idea stolen, regulatory intervention, bankrupt supplier, every loss adjusted insurance policy, every giant competitor… all of it. And there's much, much more. Should I complete the deposit?" asked the Keeper.

The Visitor staggered up to his feet, raised his eyes to the Keeper and paused to speak. But nothing came. Instead, he ran straight for the door.

Jared Albert writes:

I think the basic problem with Piketty style wealth redistribution is that everyone wants to read poetry, while no one wants to take out the trash.

That effort is often necessary for wealth doesn't answer his basic point that in a fairer world we'd help those who strove and failed as well.

Victor Niederhoffer writes: 

Yes, Mr. Albert has encapsulated the idea that has the world in its grip. When I played ball, I always wished that my opponents would share their points when they beat me. There should have been a law.

Jared Albert replies: 

A lot of effort has gone down dead ends in battery technology. Those efforts uncovered what doesn't work and provided lessons that may end up pushing some methods forward. Those failures benefit all of us.

According to an ideal Piketty model, the losers should be compensated in some form by winners as they helped move the sum of the effort forward.

I don't know for sure obviously, but I doubt you can find a Nobel laureate who doesn't feel that they stood on the shoulders of others.

My point is that in general people are disincentivized to try any of the routes if their reward has nothing to do with effort.

Stefan Jovanovich writes: 

In a fairer world we do help those who strive and fail; that is how successful teams (right now and for the past 5 seasons under Bruce Bochy, the SF Giants) and families (the anonymous R-Man's to take one of many examples from the List) and enterprises all work. As with most Leftist ideas Piketty has a valid complaint; as with all ideas based on the sacrifice of individual freedoms for collective good his Marxist solution is catastrophically bad. Some people do want to take out the trash rather than let it pile up, but no one does it for very long for the sake of strangers without getting paid in money that he or she gets to keep and spend. That is why inventive and naturally poetic people in Cuba live in a world of uncollected trash and free medical care where the patients bring the medicines to the doctors. But it is fair — everyone lives under the same collective incentive to read official poetry.
 

Sep

26

My grandfather Martin was a language genius who spoke about 30 languages. He got his start as a court interpreter by faking that he knew Yiddish and Russian while he was waiting around bankruptcy court for real estate to buy without cash. He needed the $5 he got from the gig to pay for ice skating lessons for Artie. Artie, on his 40th birthday, gave himself a special present: the first tennis lesson he could afford, a $3 lesson with Phil Rubell. The grandfather was very acerbic and was chagrined that the court clerks got double the salary of the interpreters. So at 68 he took the court clerks' test and got the second highest mark. In any case, at a time like this, he liked to say, "In their quiet way, stocks have arabesqued down 30 big points (S&P), and I think the path of least resistance is back above the round number."

And on a day like this, Birdie, his wife, whom he proposed to the first time she bent over and took stenography for him ("I have to know now or I'll never ask you again"; her job was silent-movie pianist, and that helped her with the stenography), liked to say, "Martie, I see the market is way up, and you look mad. I hope you weren't, how do you say it, 'short'."

Jared Albert writes: 

I imagine that the readers of this site could put together quite a few wonderful comments that significant others make about one's troubles trading.

When I suggested that I would split the account and teach her to day trade, my wife told me not to be an idiot and just double my size, since I was already sitting there.

She also likes to assure me that there is strong support at zero when she sees that the market is down.
  
