Friday, November 30, 2012

Where does Kindle Reader for Mac OS X from the App Store store my books?


Here:


/Users/yourusername/Library/Containers/com.amazon.Kindle/Data/Library/Application Support/Kindle/My Kindle Content/
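To jump straight there from Terminal (substituting $HOME for /Users/yourusername):

open "$HOME/Library/Containers/com.amazon.Kindle/Data/Library/Application Support/Kindle/My Kindle Content/"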

Thursday, October 25, 2012

Straight to Voicemail, If Unknown - A simple free method for blocking scam calls and robocalls


Problem: How do I block scam calls and robocalls?

Premise: 

If the call is important, the caller will leave a voicemail.

Solution:


We put every number we know into our caller ID systems.

If a number we do not recognize calls, or the caller blocks caller ID, we always let the call go to voicemail. Callers who really want or need to talk with us will leave a voicemail. If we are not interested, we delete the message.

This process initially upset some of our parents, but we have not had to deal with a robocall or scam call in some time, since the calling computers almost never leave a voicemail. Our parents are now used to it and leave messages. Sometimes, we pick up as soon as they start talking. Our friends mostly communicate via Facebook, internet chat tools, and email these days, so they are used to asynchronous communication and don't mind leaving a message. The political parties and charities we support do leave messages. We call them back to donate or express our support.

It's simple, effective, and free.

Friday, October 12, 2012

A script to split a file tree into separate trees - one per file extension present in the original tree

Purpose

Have you ever had a tree of files from which you needed only certain types? For example, I had an iTunes library in which files from another iTunes account were mixed with a large number of MP3s. I wanted to pull out the tree of MP3s only. You can make such a tree by passing a combination of flags to rsync that makes it do an exclusive include.

How?

Pass the following flags to rsync to make it do an exclusive include for files matching a certain globbing pattern. If you want to use this line on its own, fill in the variables, of course.

In particular, this rsync line:

rsync -av --include '*/' --include "*.${extension}" --exclude '*' ${source_directory}/ ${top_directory_of_results}/${extension}/
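For example, a hypothetical invocation with the variables filled in, pulling only the MP3s out of ~/Music into /tmp/split/mp3:

rsync -av --include '*/' --include '*.mp3' --exclude '*' ~/Music/ /tmp/split/mp3/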

The script:

==========================================================

This tool reads a directory of files that have extensions and then copies each type of file to its own tree.

The location of each file in the subtree matches that file's location in the original tree.

Usage:

 ./split_by_file_extension.sh \
{-s source directory|--source-dir=source directory }\
{-t top directory of results|--top-directory-of-results=top directory of results}\
{-e comma,separated,list,of,extensions | --extensions=comma,separated,list,of,extensions}


==========================================================

#!/bin/bash

set -e
set -u

usage () {

 echo "=========================================================="
 echo "This tool reads a directory of files that have extensions"
 echo "and then copies each type of file to its own tree."
 echo ""
 echo "The location of each file in the subtree matches that"
 echo "file's location in the original tree."
 echo ""
 echo "Usage: $0 {-s source directory|--source-dir=source directory} \ "
 echo "          {-t top directory of results|--top-directory-of-results=top directory of results} \ "
 echo "          {-e comma,separated,list,of,extensions | --extensions=comma,separated,list,of,extensions} "
 echo "=========================================================="
}

are_these_the_same_path () {

 original_directory="`pwd`"
 cd "$1"
 first_directory="`pwd`"
 cd "${original_directory}"
 cd "$2"
 second_directory="`pwd`"
 cd "${original_directory}"

 if [ "${first_directory}" = "${second_directory}" ]
 then
  echo true
 else
  echo false
 fi

}

if [ $# -eq 0 ]
then
 usage
 exit 1
fi

needed_number_of_arguments_set=0

while [ $# -gt 0 ]
do
 case $1 in
  -s|--source-dir=*)
   if [ "$1" = "-s" ]
   then
    shift
    source_directory="$1"
    shift
   else
    source_directory="`echo $1| sed s,--source-dir=,,`"
    shift
   fi
   echo "Source Directory: ${source_directory}"
   if [ ! -d "${source_directory}" ]
   then
    echo ""
    echo "source_directory is not a directory."
    echo ""
    usage
    exit 1
   fi
   needed_number_of_arguments_set=$((needed_number_of_arguments_set + 1))
  ;;
  -e|--extensions=*)
   if [ "$1" = "-e" ]
   then
    shift
    extensions="$1"
    shift
   else
    extensions="`echo $1| sed s#--extensions=##`"
    shift
   fi
   echo "Extensions: ${extensions}"
   needed_number_of_arguments_set=$((needed_number_of_arguments_set + 1))
  ;;
  -t|--top-directory-of-results=*)
   if [ "$1" = "-t" ]
   then
    shift
    top_directory_of_results="$1"
    shift
   else
    top_directory_of_results="`echo $1| sed s,--top-directory-of-results=,,`"
    shift
   fi
   echo "Target Directory: ${top_directory_of_results}"
   if [ ! -d "${top_directory_of_results}" ]
   then
    echo ""
    echo "top_directory_of_results is not a directory."
    echo ""
    usage
    exit 1
   fi
   needed_number_of_arguments_set=$((needed_number_of_arguments_set + 1))
  ;;
  -h|--help)
   usage
   exit 0
  ;;
  *)
   echo ""
   echo "Unrecognized flag." 1>&2
   usage
   exit 1
  ;;
 esac
done

if [ "${needed_number_of_arguments_set}" -ne "3" ]
then
 echo""
 echo "All of the options must be set." 1>&2
 usage
 exit 1
fi

are_source_directory_and_top_directory_of_results_the_same="$(are_these_the_same_path "${source_directory}" "${top_directory_of_results}")"

if [ "${are_source_directory_and_top_directory_of_results_the_same}" = true ]
then
 echo ""
 echo "source_directory and top_directory_of_results cannot be the same." 1>&2
 echo ""
 usage
 exit 1
fi

#######################################
#
# Main Process.
#
# For each extension given:
#  Make a directory for that extension under the target directory.
#  Use rsync's exclusive include to copy the files with that
#  extension from the source tree into it, preserving each
#  file's location in the tree.
#
#######################################

for extension in ${extensions//,/ }
do
  if [ ! -d "${top_directory_of_results}/${extension}" ]
  then
     mkdir "${top_directory_of_results}/${extension}"
  fi
done

for extension in ${extensions//,/ }
do
  rsync -av --include '*/' --include "*.${extension}" --exclude '*' "${source_directory}/" "${top_directory_of_results}/${extension}/"
done

Wednesday, October 3, 2012

Opinion: 21st Century Definition of the Words Up and Down

Since people are starting to go beyond Earth commercially, we need more specific definitions...

Up

Definition: Away from the gravity well of the dominant local center of gravity.

Down

Definition: Towards the gravity well of the dominant local center of gravity.


Wednesday, September 19, 2012

An algorithm for automatically flagging an unused physical server or virtual machine for retirement

by Adam Keck (incorporating helpful suggestions from Carl Friend and Tyler Yip)

Abstract


If you have a large number of virtual machines and physical servers, you need some method to automatically determine when to retire a machine after it is no longer used.  Otherwise, you have to rely on the business owner of each physical server or virtual machine to tell you when they no longer need the resource.

Below is an algorithm for automatically flagging a physical server or virtual machine for retirement due to lack of usage. In some environments, with proper tuning, this algorithm could even trigger automatic retirement.


Automatic Server Retirement Algorithm (ASRA)

  • Record these data for the lifetime of a server (physical or virtual machine):
    • disk writes and reads
    • network transmits and receives
    • CPU cycles used
  • In the first M days of usage, calculate the arithmetic mean and the RMS for each data type. (M depends on the length of your business cycles.) We suggest M = truncate(365.2/4) days, i.e., one quarter, or about three months.
  • Calculate the standard deviation from the arithmetic mean for the first M days of usage of each data type.
  • Continue recording daily total values for the above data.
  • Every M days thereafter, calculate the M-day arithmetic mean for each of the above data types.
  • Use one of the following methods to flag the server for retirement (or to retire it automatically); a sketch of Method A in code follows this list.
    • A: Flag the server for removal if the monthly arithmetic mean or RMS for all types of data is less than N% of the first quarter's arithmetic mean or RMS. N is dependent on your site's requirements and server behavior. Carl suggests N=50.
    • B: Flag the server for removal if the period's arithmetic mean or RMS for all types of data is less than N% of a moving arithmetic mean or RMS, maintained as follows. Start from the first M days' calculation. After each period, if that period's arithmetic mean or RMS is higher than the first M days' value, or lower by less than K%, recalculate the moving value from that period's data, the first M days' data, and any prior periods' data that met the same test. N is dependent on your site's requirements and server behavior. Carl suggests N=50. I suggest K=(1/4)(1-N%)
    • C: Flag the server for removal if, for all data types, the data type's period arithmetic mean is more than N standard deviations below the first M days' arithmetic mean. N is dependent on your site's requirements and server behavior. I suggest N=1.
    • D: Flag the server for removal if, for all data types, the period's arithmetic mean is more than N standard deviations below a moving arithmetic mean, maintained as follows. Start from the first M days' calculation. After each period, if that period's arithmetic mean is higher than the first M days' value, or lower by less than (1/K)(N standard deviations), recalculate the moving mean from that period's data, the first M days' data, and any prior periods' data that met the same test. N is dependent on your site's requirements and server behavior. I suggest N=1 and K=4.
  • Methods B and D take into account the case where a server reaches its normal workload only after the initial M-day period.
  • Methods A and B will probably be easier to program than C and D, since C and D are driven by a specific server's usage patterns.
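Below is a minimal sketch of Method A in shell, under some loud assumptions: a hypothetical per-server log file, usage.log, with one line per day ("YYYY-MM-DD disk_io net_io cpu_cycles"), arithmetic means only (no RMS), and M and N as suggested above.

#!/bin/bash
# Sketch of ASRA Method A. Assumes usage.log holds one line per day:
#   YYYY-MM-DD disk_io net_io cpu_cycles

M=91   # baseline/comparison window in days: truncate(365.2/4)
N=50   # threshold percentage (Carl's suggested value)

awk -v M="$M" -v N="$N" '
  NF >= 4 { day++; for (c = 2; c <= 4; c++) v[day, c] = $c }
  END {
    if (day < 2 * M) { print "not enough data to compare"; exit }
    below = 1
    for (c = 2; c <= 4; c++) {
      base = 0; recent = 0
      for (d = 1; d <= M; d++)             base   += v[d, c]
      for (d = day - M + 1; d <= day; d++) recent += v[d, c]
      # Method A: flag only if EVERY metric mean drops below N% of its baseline.
      if (recent / M >= (N / 100) * (base / M)) below = 0
    }
    print (below ? "flag for retirement" : "keep")
  }
' usage.log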

Friday, June 15, 2012

Opinion: Necessary 21st Century changes to American English grammar...

Trailing punctuation MUST be written outside quotes

In this technical age, punctuation has consequences. Anything that is not part of the quoted material should be placed outside the closing quote. For example:

Please run "rm -rf file."

instead of 


Please run "rm -rf file".

will fail to remove "file" (and, because of the -f flag, will not even print an error if "file." does not exist). Worse, if a file named "file." does exist, it will be deleted instead. This unintended deletion could have bad consequences.
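A quick way to convince yourself in a scratch directory (hypothetical session):

$ cd "$(mktemp -d)"
$ touch file
$ rm -rf file.   # period inside the quotes: the wrong name, nothing removed
$ ls
file
$ rm -rf file    # period outside the quotes: the right file is removed
$ ls
$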

Or,

Tell him, "Please run ls -l mydocument.doc."

will fail if you meant


Tell him, "Please run ls -l mydocument.doc".

resulting in a frustrated computer user.

Computers are too literal to keep using archaic 19th century punctuation standards.


Thursday, June 14, 2012

How do I clean up old large files on Linux?

Many people who have run Linux file servers and ftp servers have at some point wanted to free up some space. One good algorithm for doing this efficiently is to remove old data, starting with the largest files first. So how do you generate a list of old files sorted by size? One method is to use a "find -exec du" command:

find /path/to/full/file/system -type f -mtime +10 -exec du -sk {} \; | sort -n > /var/tmp/list_of_files_older_than_10_days_sorted_by_size

Once you have that list, you can selectively delete files from the bottom of it. Note that file sizes typically follow a heavy-tailed distribution, so the bottom 10% of the list will account for a huge chunk of the used storage space.
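If the file system is large, two variants should be faster; both are sketches assuming reasonably modern tools. The first lets find batch many files into each du invocation; the second (GNU find only) skips du entirely, since %k prints a file's disk usage in 1K blocks:

find /path/to/full/file/system -type f -mtime +10 -exec du -sk {} + | sort -n > /var/tmp/list_of_files_older_than_10_days_sorted_by_size

find /path/to/full/file/system -type f -mtime +10 -printf '%k %p\n' | sort -n > /var/tmp/list_of_files_older_than_10_days_sorted_by_size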

Tuesday, June 5, 2012

How to find the Active Directory Domain Controllers listed in DNS on Linux...

Assumptions:

  • You have the "host" utility from BIND.
  • You can do a zone transfer from the local DNS server.
  • Your Active Directory admins have properly configured DNS for Active Directory.
If you have the above, use the following command:

host -t srv -l your.active.directory.dns.domain | grep _kerberos._tcp.*._sites.dc._msdcs.your.active.directory.dns.domain

Replace your.active.directory.dns.domain with your actual AD DNS domain.
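If your DNS servers refuse zone transfers, you can usually still locate the DCs by querying the well-known DC-locator SRV record directly (again, substitute your own domain):

host -t srv _ldap._tcp.dc._msdcs.your.active.directory.dns.domain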

Monday, June 4, 2012

On Linux, how do I set the PATH for non-interactive, non-login shells (e.g., for rksh)?

Non-interactive, non-login shells inherit the PATH from the ssh process, so we must set the PATH with ssh. Some shells, like Korn Shell (ksh, rksh, pksh), only parse user environment files in login shells, so there's no way to change the inherited environment in non-interactive, non-login shells.
  • To set the path globally, build a custom ssh with the needed default path.
  • To set the path for a particular user, first configure ssh to use custom environments by enabling "PermitUserEnvironment" in /etc/ssh/sshd_config: PermitUserEnvironment yes
  • Restart sshd
  • Then set the PATH in that user's authorized_keys file or in ~/.ssh/environment (see the example after this list).
  • Note that you need to set all of the important shell variables. The existence of ~/.ssh/environment seems to preclude the setting of default environment variable values.
  • So, for example, given a location for rksh (restricted Korn shell) binaries of /usr/restricted/bin, place the following in ~/.ssh/environment:
HOME=/home/username
LOGNAME=username
MAIL=/var/mail/username
PATH=/usr/restricted/bin
PWD=/home/username
SHELL=/bin/rksh
SHLVL=1
USER=username
 

  • Note: replace username with the login of the user. Then, optionally, lock down write access to ~/.ssh/environment:
    • Set the classical permissions:
      • chown root:root /home/username/.ssh/environment
      • chmod 644 /home/username/.ssh/environment
    • Or, place the file in a restricted SELinux context and then configure an SELinux policy restricting access.
    • Or, set a POSIX ACL on the file to limit access.
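For completeness, here is what the authorized_keys variant mentioned in the list above might look like; the key material is shortened and hypothetical, and PermitUserEnvironment must be enabled as described:

environment="PATH=/usr/restricted/bin" ssh-rsa AAAAB3...rest-of-key... username@host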

    Wednesday, May 9, 2012

    I want 100% Accurate Spam Email Filtering. Why Can't I Have It?

    Proposal: 100% accurate email filtering is infinitely expensive.


    Note: I worked this out from scratch on 2012-05-09 to understand a situation at work. That being said, I am sure an analysis of this topic has been published academically before now. I generally don't have access to academic journals, so if you know of a paper that covers this issue, please post it in the comments.

    Justification:

    Let's look at the ratio of actual "real" email to spam email, call it Ractual, and the ratio of email considered "real" by your filter, call it Rfilter, at an instantaneous point in time.

    For simplicity, consider that, at any given time, your filters will either be too aggressive or not aggressive enough. That is, you will either be filtering real mail (Rfilter is above Ractual) or you will be letting spam email through your filters (Rfilter is below Ractual). In reality, both happen at the same time. In this simplified case, if your filter is too aggressive or too lax, you adjust it to push Rfilter closer to Ractual.


    Over time, Ractual varies.
    Knowing Ractual implies 100% accurate filtering. Knowing Ractual implies that the sender of each email notifies its recipients of the real/spam nature of the sent email before sending it and that such a distinction can be made. We know that spammers do not do this and we know that what recipients consider spam varies by recipient, so the best we can do is to judge emails coming into an email system and then make an adjustment to Rfilter, if needed.

    This process of adjustment produces a function, Rfilter, that approximates Ractual.
    Each adjustment is done by some process. That process takes some amount of time. A company can either pay an employee to spend the time to follow that process or outsource that process to another company. In either case, making an adjustment to Rfilter costs a certain amount of money.
    Over some time period, you make a certain number of adjustments to Rfilter to approximate Ractual. To make the approximation more accurate, you need to make more adjustments to Rfilter in the same amount of time.
    Since each adjustment costs a certain amount of money, the cost of that period of time grows with the density of adjustments in that time period. To get 100% accurate email filtering, that is, to make Rfilter = Ractual, you have to make the period between adjustments equal zero. In other words, you would have to make infinite adjustments to Rfilter in a certain time period, thus making that time period infinitely expensive. This is the case no matter how inexpensive you make the adjustment process.
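    One way to write down that last step, using c for the cost of one adjustment, \Delta t for the time between adjustments, and T for the length of the time period:

    \mathrm{Cost}(T) = c \cdot \frac{T}{\Delta t}, \qquad \lim_{\Delta t \to 0} \mathrm{Cost}(T) = \infty

    No matter how small c gets, pushing \Delta t toward zero pushes the cost up without bound.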

    Thus, a company with finite resources can never have 100% accurate spam email filtering unless every sender announces ahead of time the spam/real nature of their email and such a distinction can be made.


    Note that Ractual is actually non-continuous, since it takes an exact value only at the arrival of each new email. For any more than a trivial amount of arriving email, the small time delta between email arrivals allows us to treat it as continuous.


    Although you cannot achieve 100% accuracy, you can drive up the accuracy of Rfilter by driving down the cost of each adjustment. Lower-cost adjustments can be made more frequently for the same amount of money per time period. Bayesian email filters have been remarkably effective in this regard.

    -Adam (a0f29b982)

    Tuesday, April 24, 2012

    Idea: Bed Bug Killing Hotel Room

    Idea: A hotel room that has an integrated heating system and process to kill bed bugs between guests.

    (If it hasn't been done already. ;-)
    • Comprehensive re-engineering of everything in a hotel room to withstand 150 F temperatures, including:
      • Electronics
      • Wood finishes.
      • Bedding materials.
      • Carpeting.
      • Window treatments.
      • and other materials and items that stay in the room between guests.
    • Integrated heating system (perhaps even the normal heating system, reprogrammed) that will bring the room up to 150 degrees F for the time period needed to kill bed bugs.
      • Room completely sealed to be airtight relative to other rooms, except for airtight resealable ventilation ducts. 
      • Movement, vapor, smoke, sound sensors in the room and under the bed, desk, and tables.
      • Safety cutoffs for smoke, unknown vapors, movement, sound, etc. that kill process and open door.
      • Process can only be triggered from outside the room.
      • Pictographic activation system so that housekeeping staff for whom the local language is not their first language can run it. One method could be to use a specially marked door lock card.
      • Door auto-locks until the heating process is done to prevent external ingress.
      • Door has release from the inside that kills process and unlocks door and sends notification.
      • Activation is part of the pre-guest room checklist.
      • Hotel policy enforced by an information system that prevents the allocation of a room before it has had the bed bug treatment.
      • Reporting for state/city inspections.
    • Tax and insurance breaks for hotels with this system to encourage adoption.
    Update! The basic concept has been tested here
    Creative Commons License
    Idea: Bed Bug Killing Hotel Room. by Adam Keck is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
    Based on a work at www.bashedupbits.com.
    Permissions beyond the scope of this license may be available at adam.s.keck@gmail.com.
    Copyright 2012.

    Friday, March 16, 2012

    How to model storage growth for an organization: An approach.

    I have always wondered how to model storage growth. This post is the beginning of an answer and will be a living document on the topic going forward. I will fill in more analysis and revise it as time permits and as I learn more. I don't have time (yet) to do experiments on this proposal.

    I have heard the data growth question in various forms:
    • How much space will we use next year?
    • How much data will we have next year?
    • Can you model the growth of our data?
    • How much storage should we buy?
    • What method can I use for predicting future storage growth (or data growth)?
    As often happens in the analysis of systems that change over time, I think the right model lies in solving a relationship of rates of change (differentials). The most practical application of my proposal may be to take measurements of the data in your organization over time and then determine your specific model numerically.

    My proposal for modeling storage growth:



    Determine the set of data generators (home directories, sandboxes, log files, etc.) in your organization. Determine the growth curve for each data generator and then sum the growth curves.  The total growth curve is the sum of the growth curves of your data generators.
    S(t) = \sum_{i} s_i(t)    (1)

    Additionally, I propose that one can determine S(t) analytically by first determining the change in the number of instances of each data generator (n) and the change in the average size of each generator (g). Then S follows from:
    \frac{d^2 s_i}{dt^2} = \frac{dn_i}{dt} \cdot \frac{dg_i}{dt}, \qquad S(t) = \sum_{i} s_i(t)    (2)

    A practical method for using this proposal:



    You should be able to model S(t) directly through observation. Determine all of the data generators in your organization. Record their sizes over a time period. Find the best-fit function for each one's growth over time. Sum those best-fit functions to determine your organization's total growth curve, S(t).

    Alternatively, measure the growth in the number of instances of each data generator (n) and the growth in the average size of each data generator (g) in your organization over time. Find best-fit curves for each n and g. Plug the derivatives of each n and g into relationship (2) above and then integrate to find each s.

    This latter technique also gives an avenue for testing this proposal: determine g and n analytically, derive s for each data generator, and then compare to the curve fits for each s, n, and g.

    Details of the analytic approach:

    S(t) is the total growth of the data in your organization as a function of time.
    s(t) is the data generated by a particular data generator as a function of time. Data generators are things like home directories, sandboxes, or logs.

    I propose that the second derivative of the growth curve of a data generator is equal to the product of the change in the size of each data generator instance and the change in the number of data generator instances:

    \frac{d^2 s}{dt^2} = \frac{dg}{dt} \cdot \frac{dn}{dt}    (3)


    where

    n(t) is the number of data generator instances as a function of time and

    g(t) is the size of each instance of a data generator as a function of time.

    So,
    s(t) = \iint \frac{dg}{dt} \cdot \frac{dn}{dt} \, dt \, dt    (4)


    where g and n depend on the nature of the data generator.


    Example: linear home directory growth with linear employee growth


    For example, for each employee there exists a home directory. This means that n for home directories will depend on the model that determines the number of employees as a function of time. This model may have a periodic element to account for expansion and contraction due to the business cycle.

    In a simple case, the size of a home directory and the number of employees both grow linearly.
    g(t) = At + C    (5)
    n(t) = Bt + D    (6)

    Then s for home directories would show quadratic growth of data over time:


    s(t) = \frac{AB}{2} t^2 + c_1 t + c_0    (7)

    where A and B are measured, C and D are the initial average size of a home directory and the initial number of employees, and the constants c_1 and c_0 follow from the initial conditions.
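    Spelling out the integration behind (7) under proposed relation (3): with g and n linear, dg/dt = A and dn/dt = B, so

    \frac{d^2 s}{dt^2} = AB \;\Rightarrow\; \frac{ds}{dt} = ABt + c_1 \;\Rightarrow\; s(t) = \frac{AB}{2}t^2 + c_1 t + c_0

    where the integration constants c_1 and c_0 are fixed by the initial size and growth rate of the data.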


    Questions for further consideration:

    • What are the most common data generators? Certainly home directories, logs, and email accounts - but what else?
    • Is there a strict concrete definition of a data generator? If so, what is it? Having one will help us create experiments to test the proposals above.
    • What are the shapes of the g functions for common data generators? For example, consider a new non-developer. Their home directory, consisting only of the standard skeleton, starts off at some size K. What is the shape of the growth curve of their home directory? Answering this question will help both those of us modeling storage growth and those of us monitoring storage growth. An out-of-model home directory, for example, could be flagged for investigation.
    • What are the shapes of the n functions for common data generators? What are the shapes of the growth curve for the number of employees as a function of time? What is the shape of the growth in the log data generated by a particular server as a function of time?


    Related reading on modeling data growth I've found so far:


    -Adam Keck

    (a0f29b982)





    Creative Commons License
    How to model storage growth for an organization: An approach. by Adam Keck is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
    Based on a work at www.bashedupbits.com.
    Permissions beyond the scope of this license may be available at adam.s.keck@gmail.com.
    Copyright 2012.

    Monday, February 13, 2012

    Opinion: Should I use Linux, Mac, or Windows?




    "A craftswoman never blames her tools."

    For most, whether or not you can get your work done well on a certain platform is all that matters.




    To me, the three major operating platforms are tools that all have strengths and weaknesses. In the same way that I wouldn't use my nice chisels to loosen a laptop screw, I wouldn't use a MacBook for writing code for our Linux infrastructure. I am more efficient doing that work on Linux itself.


    At the same time, I shoot photos and video, and do some writing to take a break from IT. I've tried doing that work using the included tools on all three platforms. I find the Mac platform the most efficient and trouble-free for that work. I can do the work on Linux as well, but Linux has frustrating workflow gaps - especially regarding video.


    At work, even though we have a heterogeneous server environment, we communicate using Microsoft Office, SharePoint, and Lync. My opinion of those tools does not matter. We chose them for communication and therefore I need them to work well. Thus, at work I use Windows 7 with PuTTY, GNU Screen, and several Linux VMs to do my Linux systems engineering. At home I use a MacBook with iLife and a Linux VM. These two setups let me use the three PC platforms for the workflows for which they seem best suited. [1]


    I think it's missing the point to debate which is the one true platform. We all have things we want to do, things we want to create. In my experience, the question is not "which platform is better in general?", it's "on which platform can I most easily get my work done?". If my current platform no longer works well, I try the others. In the end, I'm paid more for getting more work done in less time, so the efficiency of a platform for that work decides the question.




    -Adam Keck

    (a0f29b982)





    [1] Note that there are six major personal computing platforms today, Linux, Mac, Windows, iOS, Android, and Web, so the landscape is actually more complex. Many tasks that were once the purview of the desktop/laptop platforms have been reimplemented with better workflows on the web and mobile platforms. Some people I know find a decent browser sufficient for all the personal computing tasks they need or want to do. Others use only their iPads for everything.

    Friday, February 3, 2012

    Shell one liner to analyze sendmail mail queue for mail "bomb" sources

    If your sendmail server gets "bombed" by some sender, one task you may need to do is find the most common patterns in the massive pile-up of mail in your queues. This one-liner extracts the Subject, To, and From fields from the qf files and then counts the unique values. With the double sort, it's a bit on the inefficient side, but it may help you anyway.

    find /local/apps/mail/spool/mqueue -type f -name "qf*" -exec cat {} \; \
    | awk -F: '/From|To|Subject/ {for(k=2;k<=NF;++k) printf "%s", $k; print ""}' \
    | sort \
    | uniq -c \
    | sort -n
    
    
    I have broken the command across multiple lines for clarity by escaping the line ends. You may want to paste the sections into one line for convenience; in that case, drop the trailing '\'s. (Note that awk's printf is given an explicit "%s" format above, so that header text containing '%' cannot be misinterpreted as a format string, and print "" ends each record without emitting an extra blank line.)
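    If the queue is huge, letting find batch many qf files into each cat invocation should be noticeably faster, with the same output; just replace the first line with:

    find /local/apps/mail/spool/mqueue -type f -name "qf*" -exec cat {} + \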
    

    Friday, January 20, 2012

    "cmore": Colorized text paging using vim...


    Sometimes, I want colorized syntax and nice navigation for paging. We can use vim to provide this service. This assumes your terminal client supports the terminal type "xterm-color". If you need another color terminal type, customize accordingly.
    • Install all of the standard vim packages
    • Add alias cmore="TERM=xterm-color vim -R -" to your ~/.profile
    • Add the following [1] to your ~/.vimrc
    syntax on
    hi Comment ctermfg=Blue guifg=Blue
    hi String ctermfg=LightRed guifg=LightRed

    • Reload your profile: source ~/.profile
    • Usage: cat some.script.sh | cmore
    • It's vim in read-only mode, so use :q to quit.

    [1] I found the default colors to be too dark.
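    Alternatively, many vim installations ship a less-style wrapper script that does roughly the same job; if yours does, an alias along these lines (the path varies by distribution and vim version) also works:

    alias vless='sh /usr/share/vim/vim*/macros/less.sh'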

    Thursday, January 19, 2012

    Intellectual Property Hack: Use copyright law and patent law together.

    Perhaps this is feasible, perhaps not.

    For an invention, first patent it, then copyright everything about it - renderings, 3D files, specifications, design documents, build process documents, manufacturing process documents, exploded views, parts lists, etc - whatever is allowed by your country's copyright laws. Someone could build it after the patent expires, but they may have a hard time communicating about the invention without creating a derivative work. Copyright protection in the U.S.A. potentially extends to over 100 years.