
From a fitness magazine, probably Muscle and Fitness, circa 1987:

Set goals for yourself. Make them believable and clearly defined. Write them down every morning, vividly imagining those goals throughout the day. Once achieved, create another. Do not put this off.

Avoid negative people. They put junk into your subconscious. You cannot help them unless they are ready to be helped. They are rarely ready.

Actively seek positive people. Make it a point to spend more time with them. Positive people are gems. Cherish every one you meet.

Eat extremely well. Maximum nutrition levels create greater strength, tissue growth, mental clarity, and a superior attitude.

Increase your self-discipline. Resist the urge to become comfortable with your efforts. Fight against laziness. Stay hungry.

Learn to delay gratification. Do what must be done now, rather than what you like to do. Never make the mistake of serving two desires. You can only serve one.

Consider spiritual values. Research this area closely. It can profoundly affect your attitude. Approach it with an open mind.

Use time wisely. Do the most productive thing possible at every given moment.

Learn from mistakes. Analyze why they happened. Use them as tools for growth.

Develop a sense of humor. Don't always take everything so seriously.

The Five Buddhist Precepts:

I undertake the training rule to abstain from killing.

I undertake the training rule to abstain from taking what is not given.

I undertake the training rule to abstain from sensual misconduct.

I undertake the training rule to abstain from false speech.

I undertake the training rule to abstain from fermented drink that causes heedlessness.

In [1]:
# -*- coding: utf-8 -*-
"""
Created on Sun Jul 12 16:28:46 2020

@author: dave
"""
import pandas as pd

#https://ourworldindata.org/coronavirus
covid_data = pd.read_csv('owid-covid-data.csv',
                         parse_dates=['date'],
                         usecols=['iso_code', 'continent', 'location',
                                  'date', 'total_cases_per_million',
                                  'total_deaths_per_million',
                                  'total_tests_per_thousand',
                                  'stringency_index', 'population',
                                  'population_density'])

#https://ourworldindata.org/urbanization#share-of-populations-living-in-urban-areas
urbanization = pd.read_csv('share-of-population-urban.csv',
                           names=['iso_code', 'Year', 'urbanization'],
                           header=0)
urbanization.reset_index(drop=True, inplace=True)

# get rid of countries with unusable iso_codes
covid_data.dropna(subset=['iso_code', 'population_density'], inplace=True)
covid_data = covid_data[~covid_data['iso_code'].isin(['OWID_WRL', 'OWID_KOS', ''])]

# for each country pick the row with the most recent date.
# covid_data contains all the data; country_list is the
# most-recent-row subset of the covid data
country_groups = covid_data.groupby('location')
row_indices = country_groups['date'].idxmax()
row_indices = row_indices.values

# most recent row per country
country_list = covid_data.loc[row_indices]

#print('country_list: \n', country_list.iloc[1])

# do the same for urbanization
urbanization_groups = urbanization.groupby('iso_code')
row_indices = urbanization_groups['Year'].idxmax()
row_indices = row_indices.values
urbanization_list = urbanization.loc[row_indices]
urbanization_list = urbanization_list.drop(columns=['Year'])
urbanization_list = urbanization_list[urbanization_list['iso_code'] != 'OWID_WRL']
urbanization_list = urbanization_list[urbanization_list['iso_code'] != 'OWID_CIS']

# merge the most recent covid rows with urbanization on iso_code
country_list = country_list.merge(urbanization_list,
                                  on=['iso_code'],
                                  how='inner')

# drop countries with zero or missing death rates
country_list = country_list[country_list['total_deaths_per_million'] != 0]
country_list = country_list.dropna(axis='index', subset=['total_deaths_per_million'])


# https://matplotlib.org/3.2.2/gallery/mplot3d/surface3d.html#sphx-glr-gallery-mplot3d-surface3d-py
# https://matplotlib.org/3.2.2/gallery/color/colorbar_basics.html#sphx-glr-gallery-color-colorbar-basics-py

import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(2, 1)
#plt.figure(figsize=(30,20))
plt.subplots_adjust(hspace=0.4)

ax1.set_ylabel('population density')
ax1.set_xlabel('total_deaths_per_million')

country_list.sort_values(axis=0,
                         by='total_deaths_per_million',
                         inplace=True,
                         ascending=False,
                         ignore_index=True)

# label the deadliest countries and set up colors (country_list is sorted in
# descending order on total_deaths_per_million)

rows, columns = country_list.shape
max_index = rows - 1

# note that the colors presume the data is sorted on
# total_deaths_per_million, descending
country_list['colors'] = ['royalblue'] * rows
country_list['alpha'] = [0.4] * rows

# get column number for 'colors'
colors_loc = country_list.columns.get_loc('colors')
alpha_loc = country_list.columns.get_loc('alpha')
for i in range(3):
    country_list.iloc[i, colors_loc] = 'red'
    country_list.iloc[i, alpha_loc] = 1.0
    country = country_list.iloc[i, 2]
    xy1 = (float(country_list['total_deaths_per_million'].iloc[i]),
           float(country_list['population_density'].iloc[i]))
    xy2 = (float(country_list['total_deaths_per_million'].iloc[i]),
           float(country_list['urbanization'].iloc[i]))
    ax1.annotate(s=country,
                 xy=xy1,
                 xytext=(xy1[0] - 80, xy1[1] + 5000),
                 arrowprops={'arrowstyle': '->'})
    ax2.annotate(s=country, xy=xy2,
                 xytext=(xy2[0] - 80, xy2[1] - 30),
                 arrowprops={'arrowstyle': '->'})

# label the least deadly (country_list is sorted in
# descending order on total_deaths_per_million)

displacement = 0.5
for i in range(max_index, max_index - 2, -1):
    country_list.iloc[i, colors_loc] = 'red'
    country_list.iloc[i, alpha_loc] = 1.0
    country = country_list.iloc[i, 2]
    xy1 = (float(country_list['total_deaths_per_million'].iloc[i]),
           float(country_list['population_density'].iloc[i]))
    xy2 = (float(country_list['total_deaths_per_million'].iloc[i]),
           float(country_list['urbanization'].iloc[i]))
    ax1.annotate(s=country,
                 xy=xy1,
                 xytext=(xy1[0] + 200 - 50*displacement, xy1[1] + displacement*4000),
                 arrowprops={'arrowstyle': '->'})
    ax2.annotate(s=country, xy=xy2,
                 xytext=(xy2[0] + 100, xy2[1] + 10),
                 arrowprops={'arrowstyle': '->'})
    displacement = displacement + 1

# annotate the highest population density points in ax1
country_list.sort_values(axis=0,
                         by='population_density',
                         inplace=True,
                         ascending=False)
for i in range(2):
    country_list.iloc[i, colors_loc] = 'red'
    country_list.iloc[i, alpha_loc] = 1.0
    country = country_list.iloc[i, 2]
    xy1 = (float(country_list['total_deaths_per_million'].iloc[i]),
           float(country_list['population_density'].iloc[i]))
    xy2 = (float(country_list['total_deaths_per_million'].iloc[i]),
           float(country_list['urbanization'].iloc[i]))
    ax1.annotate(s=country,
                 xy=xy1,
                 xytext=(xy1[0] + 50, xy1[1] - 5000 + i*8000),
                 arrowprops={'arrowstyle': '->'})
    ax2.annotate(s=country, xy=xy2,
                 xytext=(xy2[0] + 150, xy2[1] - 10 - i*15),
                 arrowprops={'arrowstyle': '->'})

plt.suptitle('Urbanization vs. Population Density in Driving COVID Death Rate')

ax2.set_xlabel('total_deaths_per_million')
ax2.set_ylabel('urbanization population \n (% of total)')

colors_loc = country_list.columns.get_loc('colors')
alpha_loc = country_list.columns.get_loc('alpha')
total_loc = country_list.columns.get_loc('total_deaths_per_million')
urban_loc = country_list.columns.get_loc('urbanization')
pop_loc = country_list.columns.get_loc('population_density')
for i in range(max_index):
    ax1.scatter(country_list.iloc[i, total_loc],
                country_list.iloc[i, pop_loc],
                marker='.', c=country_list.iloc[i, colors_loc],
                alpha=country_list.iloc[i, alpha_loc])
    ax2.scatter(country_list.iloc[i, total_loc],
                country_list.iloc[i, urban_loc],
                marker='.', c=country_list.iloc[i, colors_loc],
                alpha=country_list.iloc[i, alpha_loc])


# Assignment 3 - Building a Custom Visualization

In this assignment you must choose one of the options presented below and submit a visual as well as your source code for peer grading. The details of how you solve the assignment are up to you, although your assignment must use matplotlib so that your peers can evaluate your work. The options differ in challenge level, but there are no grades associated with the challenge level you chose. However, your peers will be asked to ensure you at least met a minimum quality for a given technique in order to pass. Implement the technique fully (or exceed it!) and you should be able to earn full grades for the assignment.

Ferreira, N., Fisher, D., & Konig, A. C. (2014, April). Sample-oriented task-driven visualizations: allowing users to make better, more confident decisions.       In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 571-580). ACM. (video)

In this paper the authors describe the challenges users face when trying to make judgements about probabilistic data generated through samples. As an example, they look at a bar chart of four years of data (replicated below in Figure 1). Each year has a y-axis value, which is derived from a sample of a larger dataset. For instance, the first value might be the number of votes in a given district or riding for 1992, with the average being around 33,000. On top of this is plotted the 95% confidence interval for the mean (see the boxplot lectures for more information, and the yerr parameter of bar charts).
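A 95% confidence interval for a sample mean is roughly mean ± 1.96 × SEM, where SEM is the sample standard deviation divided by the square root of the sample size. A minimal sketch of drawing one bar with that interval via matplotlib's yerr parameter (the sample here is illustrative, not the assignment's data):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen so the sketch runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sample = rng.normal(33000, 2000, 500)   # hypothetical votes for one year

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(len(sample))
ci95 = 1.96 * sem                        # half-width of the 95% CI

fig, ax = plt.subplots()
ax.bar([1992], [mean], yerr=ci95, capsize=10)
ax.set_ylabel('sample mean with 95% CI')
```

The yerr argument takes the half-width of the error bar, so 1.96 × SEM draws the full 95% interval symmetrically around the bar top.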

#### Figure 1 from (Ferreira et al, 2014).

A challenge that users face is that, for a given y-axis value (e.g. 42,000), it is difficult to know which x-axis values are most likely to be representative, because the confidence levels overlap and their distributions are different (the lengths of the confidence interval bars are unequal). One of the solutions the authors propose for this problem (Figure 2c) is to allow users to indicate the y-axis value of interest (e.g. 42,000) and then draw a horizontal line and color bars based on this value. So bars might be colored red if they are definitely above this value (given the confidence interval), blue if they are definitely below this value, or white if they contain this value.
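The three-color rule above can be sketched as a small classifier; the names (means, halfwidths) are illustrative, not the assignment solution:

```python
# Classify each bar against a chosen y-value using its confidence
# interval (mean +/- half-width): red if the interval is entirely
# above y, blue if entirely below, white if it contains y.
def classify(means, halfwidths, y):
    colors = []
    for m, h in zip(means, halfwidths):
        if y < m - h:
            colors.append('red')      # interval entirely above y
        elif y > m + h:
            colors.append('blue')     # interval entirely below y
        else:
            colors.append('white')    # interval contains y
    return colors
```

For example, with y = 36,000 a bar at 33,000 ± 2,000 is entirely below (blue), while one at 42,000 ± 1,500 is entirely above (red).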

#### Figure 2c from (Ferreira et al. 2014). Note that the colorbar legend at the bottom as well as the arrows are not required in the assignment descriptions below.

Easiest option: Implement the bar coloring as described above - a color scale with only three colors, (e.g. blue, white, and red). Assume the user provides the y axis value of interest as a parameter or variable.

Harder option: Implement the bar coloring as described in the paper, where the color of the bar is actually based on the amount of data covered (e.g. a gradient ranging from dark blue for the distribution being certainly below this y-axis, to white if the value is certainly contained, to dark red if the value is certainly not contained as the distribution is above the axis).

Even Harder option: Add interactivity to the above, which allows the user to click on the y axis to set the value of interest. The bar colors should change with respect to what value the user has selected.

Hardest option: Allow the user to interactively set a range of y values they are interested in, and recolor based on this (e.g. a y-axis band, see the paper for more details).

Note: The data given for this assignment is not the same as the data used in the article and as a result the visualizations may look a little different.

In [1]:
# Use the following data for this assignment:

#%matplotlib inline

import pandas as pd
import numpy as np
import math

np.random.seed(12345)

# mean, std, size

df = pd.DataFrame([np.random.normal(32000, 200000, 3650),
                   np.random.normal(43000, 100000, 3650),
                   np.random.normal(43500, 140000, 3650),
                   np.random.normal(48000, 70000, 3650)],
                  index=[1992, 1993, 1994, 1995])

sqr_n = math.sqrt(3650)

df = df.transpose()

df_stat = df.describe()

df_means = df_stat.loc["mean"]
df_errors = df_stat.loc["std"]
df_SEM = df_errors/sqr_n

year_labels = df_means.index
numpy_years = year_labels.to_numpy()
numpy_SEM = df_SEM.to_numpy()
numpy_data = df_means.to_numpy()


In [2]:
# red (tomato) if the bar is definitely above the given value,
# white if the bar's interval contains the value,
# light blue if the bar is definitely below the value

def pick_colors(numpy_years, numpy_SEM, numpy_data, y_value):

    colors = []
    for item in range(len(numpy_SEM)):
        low = numpy_data[item] - numpy_SEM[item]
        high = numpy_data[item] + numpy_SEM[item]
        if y_value < low:
            color = 'tomato'      # the whole interval sits above y_value
        elif low <= y_value <= high:
            color = 'white'       # the interval contains y_value
        else:
            color = '#add8e6'     # pale blue: the interval sits below y_value
        colors.append(color)
    return colors

#y_value = 40234
#print(pick_colors(numpy_years, numpy_SEM, numpy_data, y_value ))

In [3]:
import logging
logging.basicConfig(format='%(asctime)s  %(funcName)s %(lineno)d: %(message)s',
                    datefmt='%m/%d/%Y %I:%M:%S %p',
                    filename='aa-logs.log',
                    filemode='w',
                    level=logging.CRITICAL)

In [4]:
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.widgets import TextBox
import matplotlib.lines as lines
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
                               AutoMinorLocator, LogFormatterSciNotation)

y_init = 30000
colors = pick_colors(numpy_years, numpy_SEM, numpy_data, y_init )

# return the figure and axes
fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.2)

## do the bar chart
edgecolors = ['black','black','black','black']
bar_chart = ax.bar(x=numpy_years,
                   height=numpy_data,
                   yerr=numpy_SEM,
                   color=colors,
                   width=1.0,
                   edgecolor=edgecolors)

# reduce the number of xticks
xticks = ax.get_xticks()
xticks = xticks[0::2]
ax.set_xticks(xticks)

ax.set_ylim([0,50000])

## do the line
x = [1991, 1996]
y = [y_init, y_init]
line, = ax.plot(x,y, 'r--', label='y value')
plt.title('Bars colored red if they are definitely above given value;\n'
          'blue if they are definitely below given value;\n'
          'white if they contain given value.')
plt.subplots_adjust(top=0.85)

def submit(text):
    logging.info(text)
    ydata = float(text)   # parse the numeric input; safer than eval()
    colors = pick_colors(numpy_years, numpy_SEM, numpy_data, ydata)
    logging.info(colors)
    line.set_ydata([ydata, ydata])

    for counter, bar in enumerate(bar_chart):
        logging.info(' Patch number: {}'.format(counter))
        bar.set_color(colors[counter])
        bar.set_edgecolor('black')
    ax.relim()
    ax.autoscale_view()
    plt.draw()

# do the text box
axbox = fig.add_axes([0.1, 0.05, 0.8, 0.075])
label = 'Enter y-value:'
text_box = TextBox(axbox, label, initial=y_init)
text_box.on_submit(submit)
# could not get the textbox to be interactive without this call
text_box.set_val(y_init)



Assignment 2: Weather - Dates on X Axis - Submitted

# Assignment 2

Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to Preview the Grading for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.

An NOAA dataset has been stored in the file data/C2A2_data/BinnedCsvs_d400/fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89.csv. This is the dataset to use for this assignment. Note: The data for this assignment comes from a subset of The National Centers for Environmental Information (NCEI) Daily Global Historical Climatology Network (GHCN-Daily). The GHCN-Daily is comprised of daily climate records from thousands of land surface stations across the globe.

Each row in the assignment datafile corresponds to a single observation.

The following variables are provided to you:

• id : station identification code
• date : date in YYYY-MM-DD format (e.g. 2012-01-24 = January 24, 2012)
• element : indicator of element type
• TMAX : Maximum temperature (tenths of degrees C)
• TMIN : Minimum temperature (tenths of degrees C)
• value : data value for element (tenths of degrees C)

For this assignment, you must:

1. Read the documentation and familiarize yourself with the dataset, then write some python code which returns a line graph of the record high and record low temperatures by day of the year over the period 2005-2014. The area between the record high and record low temperatures for each day should be shaded.
2. Overlay a scatter of the 2015 data for any points (highs and lows) for which the ten-year (2005-2014) record high or record low was broken in 2015.
3. Watch out for leap days (i.e. February 29th), it is reasonable to remove these points from the dataset for the purpose of this visualization.
4. Make the visual nice! Leverage principles from the first module in this course when developing your solution. Consider issues such as legends, labels, and chart junk.

The data you have been given is near Ann Arbor, Michigan, United States, and the stations the data comes from are shown on the map below.
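Step 1 above hinges on grouping by day of year rather than by full date, so records from different years line up. A hedged sketch of that grouping, using the column names (Date, Element, Data_Value) the assignment file provides; the tiny frame built here is a stand-in for it:

```python
import pandas as pd

# stand-in for the assignment file: one observation per row
df = pd.DataFrame({
    'Date': ['2005-07-01', '2006-07-01', '2005-02-29'],
    'Element': ['TMAX', 'TMAX', 'TMAX'],
    'Data_Value': [310, 325, 100],      # tenths of degrees C
})

df = df[~df['Date'].str.contains('02-29')]   # drop leap days first
df['Date'] = pd.to_datetime(df['Date'])
df['doy'] = df['Date'].dt.strftime('%m-%d')  # day of year as MM-DD

# record high per calendar day across all years, converted to deg C
record_high = (df[df['Element'] == 'TMAX']
               .groupby('doy')['Data_Value'].max() / 10)
```

Filtering "02-29" as a string before parsing the dates sidesteps the invalid-date problem leap days would otherwise cause.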

In [1]:
#%matplotlib notebook

import matplotlib.pyplot as plt
import matplotlib as mpl
#mpl.get_backend()
import matplotlib.dates as mdates

import pandas as pd
import numpy as np
import datetime as dt

data_file = "fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89.csv"
raw_data = pd.read_csv(data_file)
raw_data.to_csv("raw_data.csv")

#print(raw_data.describe())
#print(raw_data.head())

# take the inverse of the mask to delete all leap days
raw_data = raw_data[~raw_data['Date'].str.contains("02-29")]

# separate 2005-2014 data and 2015 data
data_5_14 = raw_data[raw_data['Date'] < '2015-01-01' ]
data_2015 = raw_data[raw_data['Date'] > '2014-12-31']
#print(data_2015.head())

# separate TMAX data from TMIN data
max = data_5_14[data_5_14['Element']=='TMAX']
#max.to_csv('max.csv')
min = data_5_14[data_5_14['Element']=='TMIN']

# find max and min values for each date from among the
# measuring stations
max = max.groupby(['Date']).max()
min = min.groupby(['Date']).min()
data_2015_max = data_2015.groupby(['Date']).max()
data_2015_min = data_2015.groupby(['Date']).min()
data_2015 = pd.concat([data_2015_max, data_2015_min])
#print('new data_2015: \n', data_2015.describe())
#max.reset_index(inplace=True)
#max.to_csv('max_after_groupby.csv')

# drop unnecessary columns
max = max.drop(columns=['ID','Element'])
min = min.drop(columns=['ID','Element'])
data_2015 = data_2015.drop(columns=['ID', 'Element'])

# get the absolute highs and lows
max_high_temp = max.max()[0]
min_low_temp = min.min()[0]
#print('max_high_temp: ', max_high_temp)
#print('min_low_temp: ', min_low_temp)

# keep only the 2015 data that falls outside the 2005-2014 extremes
data_2015 = data_2015[(data_2015['Data_Value'] > max_high_temp)
                      | (data_2015['Data_Value'] < min_low_temp)]

#print("for scatter plot: \n",data_2015)

# prepare the dataframes for plotting
max = max.reset_index()
min = min.reset_index()
data_2015 = data_2015.reset_index()

#print('max: \n', max.describe())
#print('min: \n', min.describe())
#print('data_2015: \n', data_2015.describe())

max_values = max['Data_Value'].to_numpy()
max_values = max_values / 10
max_dates = max['Date'].to_numpy()

# convert np array of string dates to list of datetime.dates
max_dates = \
[dt.datetime.strptime(d,'%Y-%m-%d').date() for d in max_dates]

#print(max_dates)
#print(type(max_dates))
min_values = min['Data_Value'].to_numpy()
min_values = min_values / 10
min_dates = min['Date'].to_numpy()

# convert np array of string dates to list of datetime.dates
min_dates = \
[dt.datetime.strptime(d,'%Y-%m-%d').date() for d in min_dates]

data_2015_values = data_2015['Data_Value'].to_numpy()
data_2015_values = data_2015_values / 10
data_2015_dates = data_2015['Date'].to_numpy()


#
# First Axes.....
#

first_axes = plt.gca()
x = first_axes.xaxis

# rotate the tick labels for the x axis
for item in x.get_ticklabels():
    item.set_rotation(-45)

first_axes.set_xlabel('\nDaily Highs and Lows 2005 through 2014')
first_axes.set_ylabel('Temperature (C)')
first_axes.set_title('2015 Temperatures that Exceed Record Highs and Lows from '
                     '2005 through 2014\n')

#first_axes.set_xlim(left=min_dates.min(), right=min_dates.max())

first_axes.xaxis.set_major_formatter(mdates.DateFormatter('%m/%d/%Y'))
first_axes.xaxis.set_major_locator(mdates.AutoDateLocator())
first_axes.set_ylim(top=70, bottom=-40)

plt.subplots_adjust(bottom=0.25, top=0.7)
f_axes_max_line, = first_axes.plot(max_dates,
                                   max_values,
                                   'r',
                                   label='Maximum',
                                   alpha=0.5,
                                   linewidth=0.5)

f_axes_min_line, = first_axes.plot(min_dates,
                                   min_values,
                                   'blue',
                                   label='Minimum',
                                   alpha=0.5,
                                   linewidth=0.5)

first_axes.fill_between(max_dates, max_values, min_values, facecolor='blue', alpha=0.25)

plt.subplots_adjust(bottom=0.25, top=0.55)
#
# build second axes
#

second_axes = first_axes.twiny()
second_axes.set_xlabel("2015 Temps Exceeding 2005 to 2014 Highs and Lows\n")

second_axes.xaxis.set_major_formatter(mdates.DateFormatter('%m/%d/%Y'))
second_axes.xaxis.set_major_locator(mdates.DayLocator())

left = mpl.dates.datestr2num('2015-02-01')
right = mpl.dates.datestr2num('2015-02-28')
#left = mpl.dates.datestr2num('2015-01-01')
#right = mpl.dates.datestr2num('2015-12-31')
second_axes.set_xlim(left=left, right=right)
# rotate the tick labels for the upper x axis

# rotate the axis labels
top_x = second_axes.xaxis
for item in top_x.get_ticklabels():
    item.set_rotation(315)

# reduce the number of xticks to every other xtick in February
xticks = second_axes.get_xticks()
xticks = xticks[0::2]
second_axes.set_xticks(xticks)

data_2015_dates_num = mpl.dates.datestr2num(data_2015_dates)

s_axis_scatter = second_axes.scatter(data_2015_dates_num,
                                     data_2015_values)

plt.legend([f_axes_max_line, f_axes_min_line, s_axis_scatter],
           ('2005/14 Max Temps', '2005/14 Min Temps', '2015 Extremes'),
           loc='upper left',
           fontsize='small')

fig = plt.gcf()
fig.set_size_inches(10, 10)
#plt.gcf().autofmt_xdate()





13 Weeks to Go

RECOVERY RUN - NRC Guided Run: Just A Run - 30:00 Recovery Run

RECOVERY RUN - NRC Guided Run: Recovery Run with Headspace - 35:00 Recovery Run

SPEED RUN - NRC Guided Run: Stronger Faster - Intervals: 5:00 Warm Up, 3:00 5K Pace, 4 x 0:30 Mile Pace (interval series done 3 times), 2:00 Recovery after 5K Pace, 1:00 Recovery after Mile Pace

RECOVERY RUN - NRC Guided Run: A Cold Run - 45:00 Recovery Run

LONG RUN - NRC Guided Run: Ten Mile Run - 16K/10 Mile Run

12 Weeks to Go

RECOVERY RUN - NRC Guided Run: Just Another Run - 35:00 Recovery Run

RECOVERY RUN - NRC Guided Run: Morning Run with Headspace - 30:00 Recovery Run

SPEED RUN - NRC Guided Run: Tempo Run with Paula Radcliffe - Tempo Run: 7:00 Warm Up, 20:00 Tempo Run

RECOVERY RUN - NRC Guided Run: Running in the Dark - 48:00 Recovery Run

LONG RUN - NRC Guided Run: 15K Run - 15K/9.32 Mile Run

11 Weeks to Go

RECOVERY RUN - NRC Guided Run: Suckcess Run - 35:00 Recovery Run

RECOVERY RUN - NRC Guided Run: End of the Day Run with Headspace - 25:00 Recovery Run

The purpose of this study was to compare age and gender effects of strength training (ST) on resting metabolic rate (RMR), energy expenditure of physical activity (EEPA), and body composition.

Basal metabolic rate (BMR) is the largest component of daily energy demand in Western societies. Previous studies indicated that BMR is highly variable, but the cause of this variation is disputed. All studies agree that variation in fat-free mass (FFM) plays a major role, but effects of fat mass (FM), age, sex, and the hormones leptin, triiodothyrionine (T3), and thyroxine (T4) remain uncertain.

The scenario plays out time and again: last year’s clothes fit more snugly and the number on the scale reads higher. The mindless munching, countless nights out drinking and partying, long hours at a job that takes away from time to work out, winter weight gain (did you know the average person gains a little over one pound between September and February?), and forgotten New Year’s resolutions (for most, the winter weight gain is never lost) have taken their toll.

Many people struggle to keep their weight in check as they get older. Now new research at Karolinska Institutet in Sweden has uncovered why that is: Lipid turnover in the fat tissue decreases during ageing and makes it easier to gain weight, even if we don't eat more or exercise less than before. The study is published in the journal Nature Medicine.

9/16 Monday

I'm a little hung over from all the running over the weekend. I overslept to 3:40. This is unusual, especially for a weekday. My focus is good, though. I think my playlist music helps, maybe the coffee. I've picked up two techniques from my Iwakuni days that are really paying off again. I'm alternating supinated and pronated grips during my five sets of chins. This both increases my time between sets of identical movements and gives me a fuller, more satisfying stress to my lats. I've also gone back to stretching between sets. It helps keep me focused on my workout and has the added benefit of getting me back into doing some flexibility work. I realize that there is no science suggesting a relationship between stretching and lower injury rates from running. But, for me, as my weekly miles drift above 40, I get impossibly achy without the stretching. It really feels like I'm on the edge of injury. After a couple of weeks' stretching, the aches during my runs have greatly decreased.

Today's the Solstice. Always a time that gives me pause. A big part of my mind needs the light. I love finishing my morning runs bathed in the dawn. I watch the brilliant moon and all her entourage fade before the flood of the dawn. Bats, chasing their evening dinners, yield to the birds at their breakfasts. My day begins at this pinnacle and starts its journey through the hours. I never end at the height where I begin. Likewise, this year has been one progression upwards, out of the winter. This week is my best week of running in two years, possibly ever. Now, the light will begin to withdraw. The shortened day after the Solstice is the first harbinger of Fall. Fall, Winter, Darkness, and Cold are always there to test. This year, I stand up to the Marathon again. It will be an accounting. Fifteen years after my first MCM, eight years after my last. I deluded myself in the years when I told myself that training at or near marathon level was enough. That is like saying "I could have stood" as a substitute for standing. Either you stand and measure yourself or you no longer are. The Darkness comes inevitably. As long as you stand, you are.

Short on time tonight. Going to do two sets of full-body compound barbell lifts with one-minute rests. The second set goes to failure. Roughly two minutes between exercises. This is a significant increase in tempo. I accomplish substantially fewer reps in the second set for upper-body movements. My legs seem more immune to the short rest.

My usual routine is five sets with three-minute rests. It takes roughly 75 minutes. This drops it to less than 30. Total volume today: 12,923 lb vice my last complete workout at 26,472 lb.

Internet social media has been a series of steps shortening the creation process to post thoughts online. A website is an n-dimensional model of a set of thoughts or abstract concepts. Each page is a static whole and a part of

I don't like Facebook. You get a brief thought, which you can share with a select audience with minimal effort. You can easily attach some underscoring media, such as a photograph or a brief video. You come away with a false sense that you have communicated something to an audience that is actually listening. You gain a feeling of accomplishment, of authorship. Then your communication becomes part of an unsearchable stream of other trivial thoughts that follow in time sequence into near oblivion. Or worse, as a demagogue, you conjure up some half-truth or lie, create some slick media with it, and publish it to a focused group of followers who take your post as truth because it is on the Internet, despite the complete lack of any standard of proof, and contribute to the increasing polarization and shallowness of American politics.

But Facebook is just part of the trend. Twitter has reduced communication to 140 characters. We have a president who fits the times. He struggles to be coherent in 140 characters. There can be no complete thoughts in Twitter, only impressions and memes. It is the deconstruction of thought.

Facing the Collegiate Range, we came across this diner. Amazingly, the interior is done in a Caribbean style. It's just a burger and breakfast but it all tasted great in that high altitude mountain air.

Mesa east of Durango, facing north. It's just outside Pagosa Springs.

The Gunnison etched a vast, deep canyon into the Mesa over millions of years. It is called the Black Canyon because, at best, sunlight reaches its bottom for only 32 minutes a day. The local Native Americans avoided it out of superstition.

The Collegiate Range is the second range just west of Colorado's Front Range. While the latter contains Pikes Peak and is home to four 14'ers, the Collegiate Peaks rise off the valley between them with a line of nine 14'ers and another six 13'ers, and are home to the Continental Divide. It is an incredible sight, one massif after another, a wall of massive peaks. Coming upon them going east after Monarch Pass takes the breath away.

When I lift, my mind sets itself against the gray weight in a Sisyphean contest. With each repetition, my Will dominates the weight as I lift it through multiple repetitions and sets. I am not the weight; I control it. Yet, I grow weaker with each lift. The effort is ultimately futile. Each lift is pyrrhic. Gravity and steel ultimately crush Will and flesh. I arrive at the point where another repetition is impossible. My flesh is crushed, but not my Will. I am not the weight. I am the Will that demands that I will return to that lift, stronger than the last time. I am not gravity and weight; I am flesh and Will.

The deadlift is brutally simple in concept. It is a matter of squatting down, gripping a heavy weight, and standing with it. In execution, the deadlift requires fitness, skill, and knowledge to derive maximum benefit from it while minimizing the risk of injury. But nothing tests and grows brute strength like the deadlift. Feet, calves, quads, glutes, hamstrings, abs, spinal erectors, deltoids, lats, traps, forearms, hands.....and lungs...and mind are all recruited for maximal effort to lift the weight. Nothing tests elan vital like the deadlift.

“Steel is not strong, boy. Flesh is stronger. What is steel compared to the hand that wields it?”

When I run, I float across the earth a homeless interloper. Tiny in the landscape, I become part of the landscape. The wind does not blow against me, it blows through me, it blows with me. I become part of the wind all the while it buffets. My mind ceases to be separate. There is no Will in the endless repetition of steps forward. Mind and Will and body cease to exist separately from Nature. I blend with it. I cease to exist as I float across the landscape, a homeless interloper.

I've said it before. I don't like Facebook....on academic grounds (you can't use a search mechanism to cite what you've said before), on political grounds (it was used to throw an election), on cognitive grounds (it degrades discourse to aphoristic-sized texts that require no thought to produce), on didactic grounds (degraded discourse leads to degraded thinking...which makes you stupid), and on social grounds (it gives too big a bullhorn to stupid people). To this, I add another: it is another soulless corporation which will get in bed with the devil to protect profits: https://www.nytimes.com/2018/11/15/technology/facebook-definers-opposition-research.html