# Linear text segmentation#


## Introduction#

Linear text segmentation consists of dividing a text into several meaningful segments. It can be framed as a change point detection task and can therefore be carried out with ruptures. This example does exactly that on a well-known data set introduced in [Choi2000].
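To make the connection concrete, here is a minimal, self-contained sketch of change point detection by dynamic programming on a toy one-dimensional signal. This is plain Python with a simple least-squares cost, not ruptures' optimized solvers; the notebook below applies the same principle, with a text-specific cost, through ruptures.

```python
def l2_cost(y, a, b):
    # quadratic error of the segment y[a:b] around its mean
    seg = y[a:b]
    mean = sum(seg) / len(seg)
    return sum((v - mean) ** 2 for v in seg)


def best_segmentation(y, n_bkps, cost):
    """Exact minimization of the sum of segment costs with n_bkps change points."""
    T = len(y)
    INF = float("inf")
    # opt[k][t]: minimal cost of splitting y[0:t] into k + 1 segments
    opt = [[INF] * (T + 1) for _ in range(n_bkps + 1)]
    last = [[0] * (T + 1) for _ in range(n_bkps + 1)]
    for t in range(1, T + 1):
        opt[0][t] = cost(y, 0, t)
    for k in range(1, n_bkps + 1):
        for t in range(k + 1, T + 1):
            for s in range(k, t):
                val = opt[k - 1][s] + cost(y, s, t)
                if val < opt[k][t]:
                    opt[k][t] = val
                    last[k][t] = s
    # backtrack the optimal breakpoints (the last index T is included, as in ruptures)
    bkps, t = [T], T
    for k in range(n_bkps, 0, -1):
        t = last[k][t]
        bkps.append(t)
    return sorted(bkps)


signal = [0.0] * 10 + [5.0] * 10 + [0.0] * 10
print(best_segmentation(signal, n_bkps=2, cost=l2_cost))  # [10, 20, 30]
```

The triple loop is cubic in the signal length; ruptures implements the same dynamic program with cached segment costs, which is what makes it practical on longer signals.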

## Setup#

First, we import packages and define a few utility functions. This section can be skipped on a first reading.

Library imports.

```python
from pathlib import Path

import matplotlib.cm as cm
import matplotlib.pyplot as plt
import nltk
import numpy as np
import ruptures as rpt  # our package
from matplotlib.colors import LogNorm
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import regexp_tokenize
from ruptures.base import BaseCost
from ruptures.exceptions import NotEnoughPoints
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nltk.download("stopwords")
STOPWORD_SET = set(
    stopwords.words("english")
)  # set of stopwords of the English language
PUNCTUATION_SET = set("!\"#$%&'()*+,-./:;<=>?@[\\]^_{|}~")
```

```
[nltk_data] Downloading package stopwords to /home/runner/nltk_data...
[nltk_data]   Unzipping corpora/stopwords.zip.
```

Utility functions.

```python
def preprocess(list_of_sentences: list) -> list:
    """Preprocess each sentence: remove punctuation and stopwords, then stem."""
    ps = PorterStemmer()
    transformed = list()
    for sentence in list_of_sentences:
        list_of_words = regexp_tokenize(text=sentence.lower(), pattern=r"\w+")
        list_of_words = [
            ps.stem(word) for word in list_of_words if word not in STOPWORD_SET
        ]
        transformed.append(" ".join(list_of_words))
    return transformed


def draw_square_on_ax(start, end, ax, linewidth=0.8):
    """Draw a square on the given ax object."""
    ax.vlines(
        x=[start - 0.5, end - 0.5],
        ymin=start - 0.5,
        ymax=end - 0.5,
        linewidth=linewidth,
    )
    ax.hlines(
        y=[start - 0.5, end - 0.5],
        xmin=start - 0.5,
        xmax=end - 0.5,
        linewidth=linewidth,
    )
    return ax
```

## Data#

### Description#

The text to segment is a concatenation of excerpts from ten different documents randomly selected from the so-called Brown corpus. Each excerpt has nine to eleven sentences, amounting to 99 sentences in total. The complete text is shown in Appendix A. These data stem from a larger, publicly available data set which is thoroughly described in [Choi2000] and is a common benchmark for evaluating text segmentation methods.

```python
# loading the text
filepath = Path("../data/text-segmentation-data.txt")
original_text = filepath.read_text().split("\n")
TRUE_BKPS = [11, 20, 30, 40, 49, 59, 69, 80, 90, 99]  # read from the data description
print(f"There are {len(original_text)} sentences, from {len(TRUE_BKPS)} documents.")
```

```
There are 99 sentences, from 10 documents.
```

The objective is to automatically recover the boundaries of the 10 excerpts, using the fact that they come from quite different documents and therefore have distinct topics.
For instance, in the small extract printed by the following cell, an accurate text segmentation procedure should detect that the first two sentences (10 and 11) and the last three sentences (12 to 14) belong to two different documents and have very different semantic fields.

```python
# print 5 sentences from the original text
start, end = 9, 14
for line_number, sentence in enumerate(original_text[start:end], start=start + 1):
    sentence = sentence.strip("\n")
    print(f"{line_number:>2}: {sentence}")
```

```
10: That could be easily done , but there is little reason in it .
11: It would come down to saying that Fromm paints with a broad brush , and that , after all , is not a conclusion one must work toward but an impression he has from the outset .
12: the effect of the digitalis glycosides is inhibited by a high concentration of potassium in the incubation medium and is enhanced by the absence of potassium ( Wolff , 1960 ) .
13: B. Organification of iodine The precise mechanism for organification of iodine in the thyroid is not as yet completely understood .
14: However , the formation of organically bound iodine , mainly mono-iodotyrosine , can be accomplished in cell-free systems .
```

### Preprocessing#

Before performing text segmentation, the original text is preprocessed. In a nutshell (see [Choi2000] for more details):

• punctuation and stopwords are removed;

• words are reduced to their stems (e.g., "waited" and "waiting" become "wait");

• a vector of word counts is computed.

```python
# transform text
transformed_text = preprocess(original_text)
# print original and transformed
ind = 97
print("Original sentence:")
print(f"\t{original_text[ind]}")
print()
print("Transformed:")
print(f"\t{transformed_text[ind]}")
```

```
Original sentence:
    Then Heywood Sullivan , Kansas City catcher , singled up the middle and Throneberry was across with what proved to be the winning run .

Transformed:
    heywood sullivan kansa citi catcher singl middl throneberri across prove win run
```

Once the text is preprocessed, each sentence is transformed into a vector of word counts.

```python
vectorizer = CountVectorizer(analyzer="word")
vectorized_text = vectorizer.fit_transform(transformed_text)

# `get_feature_names` was removed in recent scikit-learn versions
vocabulary = list(vectorizer.get_feature_names_out())
msg = f"There are {len(vocabulary)} different words in the corpus, e.g. {vocabulary[20:30]}."
print(msg)
```

```
There are 842 different words in the corpus, e.g. ['acid', 'across', 'act', 'activ', 'actual', 'ad', 'adair', 'addit', 'administ', 'administr'].
```

Note that the vectorized text representation is a (very) sparse matrix.

## Text segmentation#

### Cost function#

To compare (the vectorized representations of) two sentences, [Choi2000] uses the cosine similarity $k_{\text{cosine}}: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$:

$$
k_{\text{cosine}}(x, y) := \frac{\langle x \mid y \rangle}{\|x\|\|y\|}
$$

where $x$ and $y$ are two $d$-dimensional vectors of word counts.

Text segmentation then amounts to kernel change point detection (see [Truong2020] for more details). However, this particular kernel is not implemented in ruptures, so we need to create a custom cost function. (Actually, it is implemented in ruptures, but the current implementation does not exploit the sparse structure of the vectorized text representation and can therefore be slow.)

Let $y=\{y_0, y_1,\dots,y_{T-1}\}$ be a $d$-dimensional signal with $T$ samples. Recall that a cost function $c(\cdot)$ that derives from a kernel $k(\cdot, \cdot)$ is such that

$$
c(y_{a..b}) = \sum_{t=a}^{b-1} G_{t, t} - \frac{1}{b-a} \sum_{a \leq s < b} \sum_{a \leq t < b} G_{s,t}
$$

where $y_{a..b}$ is the sub-signal $\{y_a, y_{a+1},\dots,y_{b-1}\}$ and $G_{s,t}:=k(y_s, y_t)$ (see [Truong2020] for more details). In other words, $(G_{s,t})_{s,t}$ is the $T\times T$ Gram matrix of $y$.
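As a sanity check, the cost above can be evaluated by hand on a tiny example. The following sketch (plain Python, with made-up word-count vectors, not the notebook's data) builds the cosine Gram matrix of three short count vectors and applies the formula; a segment of topically similar sentences should be cheaper than one mixing topics.

```python
import math


def cosine(x, y):
    # cosine similarity k(x, y) = <x, y> / (||x|| ||y||)
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)


def kernel_cost(signal, a, b):
    # c(y_{a..b}) = sum_t G_tt - (1 / (b - a)) * sum_{s,t} G_st
    gram = [[cosine(signal[s], signal[t]) for t in range(a, b)] for s in range(a, b)]
    diag = sum(gram[i][i] for i in range(b - a))
    total = sum(sum(row) for row in gram)
    return diag - total / (b - a)


# three toy word-count vectors: the first two share vocabulary, the third does not
signal = [[2, 1, 0, 0], [1, 2, 0, 0], [0, 0, 3, 1]]
print(kernel_cost(signal, 0, 2))  # homogeneous pair: small cost
print(kernel_cost(signal, 0, 3))  # heterogeneous segment: larger cost
```

Since the diagonal entries of a cosine Gram matrix are all 1, the cost of a segment of length $b-a$ is $(b-a)$ minus the average pairwise similarity times $(b-a)$: the more mutually similar the sentences, the smaller the cost.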
Thanks to this formula, we can now implement our custom cost function (named CosineCost in the following cell).

```python
class CosineCost(BaseCost):
    """Cost derived from the cosine similarity."""

    # The 2 following attributes must be specified for compatibility.
    model = "custom_cosine"
    min_size = 2

    def fit(self, signal):
        """Set the internal parameter."""
        self.signal = signal
        self.gram = cosine_similarity(signal, dense_output=False)
        return self

    def error(self, start, end) -> float:
        """Return the approximation cost on the segment [start:end].

        Args:
            start (int): start of the segment
            end (int): end of the segment

        Returns:
            segment cost

        Raises:
            NotEnoughPoints: when the segment is too short (less than min_size samples).
        """
        if end - start < self.min_size:
            raise NotEnoughPoints
        sub_gram = self.gram[start:end, start:end]
        val = sub_gram.diagonal().sum()
        val -= sub_gram.sum() / (end - start)
        return val
```

### Compute change points#

If the number $K$ of change points is assumed to be known, we can use dynamic programming to search for the exact segmentation $\hat{t}_1,\dots,\hat{t}_K$ that minimizes the sum of segment costs:

$$
\hat{t}_1,\dots,\hat{t}_K := \arg\min_{t_1,\dots,t_K} \left[ c(y_{0..t_1}) + c(y_{t_1..t_2}) + \dots + c(y_{t_K..T}) \right].
$$

```python
n_bkps = 9  # there are 9 change points (10 text segments)

algo = rpt.Dynp(custom_cost=CosineCost(), min_size=2, jump=1).fit(vectorized_text)
predicted_bkps = algo.predict(n_bkps=n_bkps)

print(f"True change points are\t\t{TRUE_BKPS}.")
print(f"Detected change points are\t{predicted_bkps}.")
```

```
True change points are		[11, 20, 30, 40, 49, 59, 69, 80, 90, 99].
Detected change points are	[12, 19, 30, 40, 49, 59, 70, 78, 94, 99].
```

(Note that the last change point index is simply the length of the signal. This is by design.)

Predicted breakpoints are quite close to the true change points. Indeed, most estimated changes are less than one sentence away from a true change.
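Using the two breakpoint lists printed above, the detection error of each true change (the distance, in sentences, to the closest detected change) can be computed in a couple of lines:

```python
true_bkps = [11, 20, 30, 40, 49, 59, 69, 80, 90, 99]
predicted_bkps = [12, 19, 30, 40, 49, 59, 70, 78, 94, 99]

# distance (in sentences) from each true change to the closest detected change
errors = [min(abs(t - p) for p in predicted_bkps) for t in true_bkps]
print(errors)  # [1, 1, 0, 0, 0, 0, 1, 2, 4, 0]
```

The largest error (4 sentences) corresponds to the boundary between the last two excerpts.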
The last change is less accurately predicted, with an error of 4 sentences. To overcome this issue, one solution would be to consider a richer representation (compared to the sparse word frequency vectors).

### Visualize segmentations#

Show sentence numbers. In the following cell, the two segmentations (true and predicted) can be visually compared. For each paragraph, the sentence numbers are shown.

```python
true_segment_list = rpt.utils.pairwise([0] + TRUE_BKPS)
predicted_segment_list = rpt.utils.pairwise([0] + predicted_bkps)

for n_paragraph, (true_segment, predicted_segment) in enumerate(
    zip(true_segment_list, predicted_segment_list), start=1
):
    print(f"Paragraph n°{n_paragraph:02d}")
    start_true, end_true = true_segment
    start_pred, end_pred = predicted_segment

    start = min(start_true, start_pred)
    end = max(end_true, end_pred)
    msg = " ".join(
        f"{ind+1:02d}" if (start_true <= ind < end_true) else "  "
        for ind in range(start, end)
    )
    print(f"(true)\t{msg}")
    msg = " ".join(
        f"{ind+1:02d}" if (start_pred <= ind < end_pred) else "  "
        for ind in range(start, end)
    )
    print(f"(pred)\t{msg}")
    print()
```

```
Paragraph n°01
(true)    01 02 03 04 05 06 07 08 09 10 11
(pred)    01 02 03 04 05 06 07 08 09 10 11 12

Paragraph n°02
(true)    12 13 14 15 16 17 18 19 20
(pred)       13 14 15 16 17 18 19

Paragraph n°03
(true)       21 22 23 24 25 26 27 28 29 30
(pred)    20 21 22 23 24 25 26 27 28 29 30

Paragraph n°04
(true)    31 32 33 34 35 36 37 38 39 40
(pred)    31 32 33 34 35 36 37 38 39 40

Paragraph n°05
(true)    41 42 43 44 45 46 47 48 49
(pred)    41 42 43 44 45 46 47 48 49

Paragraph n°06
(true)    50 51 52 53 54 55 56 57 58 59
(pred)    50 51 52 53 54 55 56 57 58 59

Paragraph n°07
(true)    60 61 62 63 64 65 66 67 68 69
(pred)    60 61 62 63 64 65 66 67 68 69 70

Paragraph n°08
(true)    70 71 72 73 74 75 76 77 78 79 80
(pred)       71 72 73 74 75 76 77 78

Paragraph n°09
(true)          81 82 83 84 85 86 87 88 89 90
(pred)    79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94

Paragraph n°10
(true)    91 92 93 94 95 96 97 98 99
(pred)                95 96 97 98 99
```

Show the Gram matrix. In addition, the text segmentation can be shown on the Gram matrix that was used to detect changes. This is done in the following cell. Most segments (represented by the blue squares) are similar between the true segmentation and the predicted segmentation, except for the last two. This is mainly due to the fact that, in the penultimate excerpt, all sentences are dissimilar (with respect to the cosine measure).

```python
fig, ax_arr = plt.subplots(nrows=1, ncols=2, figsize=(7, 5), dpi=200)

# plot config
title_fontsize = 10
label_fontsize = 7
title_list = ["True text segmentation", "Predicted text segmentation"]

for ax, title, bkps in zip(ax_arr, title_list, [TRUE_BKPS, predicted_bkps]):
    # plot gram matrix
    ax.imshow(algo.cost.gram.toarray(), cmap=cm.plasma, norm=LogNorm())
    # add text segmentation
    for start, end in rpt.utils.pairwise([0] + bkps):
        draw_square_on_ax(start=start, end=end, ax=ax)
    # add labels and title
    ax.set_title(title, fontsize=title_fontsize)
    ax.set_xlabel("Sentence index", fontsize=label_fontsize)
    ax.set_ylabel("Sentence index", fontsize=label_fontsize)
    ax.tick_params(axis="both", which="major", labelsize=label_fontsize)
```

## Conclusion#

This example shows how to apply ruptures to a text segmentation task. In detail, we detected shifts in the vocabulary of a collection of sentences using common NLP preprocessing and transformations. This task amounts to a kernel change point detection procedure where the kernel is the cosine kernel. Such results can then be used to characterize the structure of the text for subsequent NLP tasks. This procedure should certainly be enriched with more relevant and compact representations to better detect changes.

## Appendix A#

The complete text used in this notebook is as follows. Note that the line numbers and the blank lines (added to visually mark the boundaries between excerpts) are not part of the text fed to the segmentation method.
```python
for start, end in rpt.utils.pairwise([0] + TRUE_BKPS):
    excerpt = original_text[start:end]
    for n_line, sentence in enumerate(excerpt, start=start + 1):
        sentence = sentence.strip("\n")
        print(f"{n_line:>2}: {sentence}")
    print()
```

1: The Sane Society is an ambitious work .
2: Its scope is as broad as the question : What does it mean to live in modern society ? ?
3: A work so broad , even when it is directed by a leading idea and informed by a moral vision , must necessarily  fail '' .
4: Even a hasty reader will easily find in it numerous blind spots , errors of fact and argument , important exclusions , areas of ignorance and prejudice , undue emphases on trivia , examples of broad positions supported by flimsy evidence , and the like .
5: Such books are easy prey for critics .
6: Nor need the critic be captious .
7: A careful and orderly man , who values precision and a kind of tough intellectual responsibility , might easily be put off by such a book .
8: It is a simple matter , for one so disposed , to take a work like The Sane Society and shred it into odds and ends .
9: The thing can be made to look like the cluttered attic of a large and vigorous family -- a motley jumble of discarded objects , some outworn and some that were never useful , some once whole and bright but now chipped and tarnished , some odd pieces whose history no one remembers , here and there a gem , everything fascinating because it suggests some part of the human condition -- the whole adding up to nothing more than a glimpse into the disorderly history of the makers and users .
10: That could be easily done , but there is little reason in it .
11: It would come down to saying that Fromm paints with a broad brush , and that , after all , is not a conclusion one must work toward but an impression he has from the outset .

12: the effect of the digitalis glycosides is inhibited by a high concentration of potassium in the incubation medium and is enhanced by the absence of potassium ( Wolff , 1960 ) .
13: B. Organification of iodine The precise mechanism for organification of iodine in the thyroid is not as yet completely understood .
14: However , the formation of organically bound iodine , mainly mono-iodotyrosine , can be accomplished in cell-free systems .
15: In the absence of additions to the homogenate , the product formed is an iodinated particulate protein ( Fawcett and Kirkwood , 1953 ; ; Taurog , Potter and Chaikoff , 1955 ; ; Taurog , Potter , Tong , and Chaikoff , 1956 ; ; Serif and Kirkwood , 1958 ; ; De Groot and Carvalho , 1960 ) .
16: This iodoprotein does not appear to be the same as what is normally present in the thyroid , and there is no evidence so far that thyroglobulin can be iodinated in vitro by cell-free systems .
17: In addition , the iodoamino acid formed in largest quantity in the intact thyroid is di-iodotyrosine .
18: If tyrosine and a system generating hydrogen peroxide are added to a cell-free homogenate of the thyroid , large quantities of free mono-iodotyrosine can be formed ( Alexander , 1959 ) .
19: It is not clear whether this system bears any resemblance to the in vivo iodinating mechanism , and a system generating peroxide has not been identified in thyroid tissue .
20: On chemical grounds it seems most likely that iodide is first converted to Afj and then to Afj as the active iodinating species .

21: the statement empirical , for goodness was not a quality like red or squeaky that could be seen or heard .
22: What were they to do , then , with these awkward judgments of value ? ?
23: To find a place for them in their theory of knowledge would require them to revise the theory radically , and yet that theory was what they regarded as their most important discovery .
24: It appeared that the theory could be saved in one way only .
25: If it could be shown that judgments of good and bad were not judgments at all , that they asserted nothing true or false , but merely expressed emotions like  Hurrah '' or  Fiddlesticks '' , then these wayward judgments would cease from troubling and weary heads could be at rest .
26: This is the course the positivists took .
27: They explained value judgments by explaining them away .
28: Now I do not think their view will do .
29: But before discussing it , I should like to record one vote of thanks to them for the clarity with which they have stated their case .
30: It has been said of John Stuart Mill that he wrote so clearly that he could be found out .

31: Greer Garson , world-famous star of stage , screen and television , will be honored for the high standard in tasteful sophisticated fashion with which she has created a high standard in her profession .
32: As a Neiman-Marcus award winner the titian-haired Miss Garson is a personification of the individual look so important to fashion this season .
33: She will receive the 1961  Oscar '' at the 24th annual Neiman-Marcus Exposition , Tuesday and Wednesday in the Grand Ballroom of the Sheraton-Dallas Hotel .
34: The only woman recipient , Miss Garson will receive the award with Ferdinando Sarmi , creator of chic , beautiful women 's fashions ; ; Harry Rolnick , president of the Byer-Rolnick Hat Corporation and designer of men 's hats ; ; Sydney Wragge , creator of sophisticated casuals for women and Roger Vivier , designer of Christian Dior shoes Paris , France , whose squared toes and lowered heels have revolutionized the shoe industry .
35: The silver and ebony plaques will be presented at noon luncheons by Stanley Marcus , president of Neiman-Marcus , Beneficiary of the proceeds from the two showings will be the Dallas Society for Crippled Children Cerebral Palsy Treatment Center .
36: The attractive Greer Garson , who loves beautiful clothes and selects them as carefully as she does her professional roles , prefers timeless classical designs .
37: Occasionally she deserts the simple and elegant for a fun piece simply because  It 's unlike me '' .
38: In private life , Miss Garson is Mrs. E.E. Fogelson and on the go most of the time commuting from Dallas , where they maintain an apartment , to their California home in Los Angeles ' suburban Bel-Air to their ranch in Pecos , New Mexico .
39: Therefore , her wardrobe is largely mobile , to be packed at a moment 's notice and to shake out without a wrinkle .
40: Her creations in fashion are from many designers because she does n't want a complete wardrobe from any one designer any more than she wants  all of her pictures by one painter '' .

41: Wage-price policies of industry are the result of a complex of forces -- no single explanation has been found which applies to all cases .
42: The purpose of this paper is to analyze one possible force which has not been treated in the literature , but which we believe makes a significant contribution to explaining the wage-price behavior of a few very important industries .
43: While there may be several such industries to which the model of this paper is applicable , the authors make particular claim of relevance to the explanation of the course of wages and prices in the steel industry of the United States since World War 2 .
44: Indeed , the apparent stiffening of the industry 's attitude in the recent steel strike has a direct explanation in terms of the model here presented .
45: The model of this paper considers an industry which is not characterized by vigorous price competition , but which is so basic that its wage-price policies are held in check by continuous critical public scrutiny .
46: Where the industry 's product price has been kept below the  profit-maximizing '' and  entry-limiting '' prices due to fears of public reaction , the profit seeking producers have an interest in offering little real resistance to wage demands .
47: The contribution of this paper is a demonstration of this proposition , and an exploration of some of its implications .
48: In order to focus clearly upon the operation of this one force , which we may call the effect of  public-limit pricing '' on  key '' wage bargains , we deliberately simplify the model by abstracting from other forces , such as union power , which may be relevant in an actual situation .
49: For expository purposes , this is best treated as a model which spells out the conditions under which an important industry affected with the public interest would find it profitable to raise wages even in the absence of union pressures for higher wages .

50: The vast Central Valley of California is one of the most productive agricultural areas in the world .
51: During the summer of 1960 , it became the setting for a bitter and basic labor-management struggle .
52: The contestants in this economic struggle are the Agricultural Workers Organizing Committee ( AWOC ) of the AFL-CIO and the agricultural employers of the State .
53: By virtue of the legal responsibilities of the Department of Employment in the farm placement program , we necessarily found ourselves in the middle between these two forces .
54: It is not a pleasant or easy position , but one we have endeavored to maintain .
55: We have sought to be strictly neutral as between the parties , but at the same time we have been required frequently to rule on specific issues or situations as they arose .
56: Inevitably , one side was pleased and the other displeased , regardless of how we ruled .
57: Often the displeased parties interpreted our decision as implying favoritism toward the other .
58: We have consoled ourselves with the thought that this is a normal human reaction and is one of the consequences of any decision in an adversary proceeding .
59: It is disconcerting , nevertheless , to read in a labor weekly ,  Perluss knuckles down to growers '' , and then to be confronted with a growers ' publication which states ,  Perluss recognizes obviously phony and trumped-up strikes as bona fide '' .

60: Rookie Ron Nischwitz continued his pinpoint pitching Monday night as the Bears made it two straight over Indianapolis , 5-3 .
61: The husky 6-3 , 205-pound lefthander , was in command all the way before an on-the-scene audience of only 949 and countless of television viewers in the Denver area .
62: It was Nischwitz ' third straight victory of the new season and ran the Grizzlies ' winning streak to four straight .
63: They now lead Louisville by a full game on top of the American Association pack .
64: Nischwitz fanned six and walked only Charley Hinton in the third inning .
65: He has given only the one pass in his 27 innings , an unusual characteristic for a southpaw .
66: The Bears took the lead in the first inning , as they did in Sunday 's opener , and never lagged .
67: Dick McAuliffe cracked the first of his two doubles against Lefty Don Rudolph to open the Bear 's attack .
68: After Al Paschal gruonded out , Jay Cooke walked and Jim McDaniel singled home McAuliffe .
69: Alusik then moved Cooke across with a line drive to left .

70: Unemployed older workers who have no expectation of securing employment in the occupation in which they are skilled should be able to secure counseling and retraining in an occupation with a future .
71: Some vocational training schools provide such training , but the current need exceeds the facilities .
72: Current programs The present Federal program of vocational education began in 1917 with the passage of the Smith-Hughes Act , which provided a continuing annual appropriation of $ 7 million to support , on a matching basis , state-administered programs of vocational education in agriculture , trades , industrial skills and home economics .
73: Since 1917 some thirteen supplementary and related acts have extended this Federal program .
74: The George-Barden Act of 1946 raised the previous increases in annual authorizations to $ 29 million in addition to the $ 7 million under the Smith Act .
75: The Health Amendment Act of 1956 added $ 5 million for practical nurse training .
76: The latest major change in this program was introduced by the National Defense Education Act of 1958 , Title 8 , of which amended the George-Barden Act .
77: Annual authorizations of $ 15 million were added for area vocational education programs that meet national defense needs for highly skilled technicians .
78: The Federal program of vocational education merely provides financial aid to encourage the establishment of vocational education programs in public schools .
79: The initiative , administration and control remain primarily with the local school districts .
80: Even the states remain primarily in an assisting role , providing leadership and teacher training .

81: briefly , the topping configuration must be examined for its inferences .
82: Then the fact that the lower channel line was pierced had further forecasting significance .
83: And then the application of the count rules to the width ( horizontally ) of the configuration gives us an intial estimate of the probable depth of the decline .
84: The very idea of there being  count rules '' implies that there is some sort of proportion to be expected between the amount of congestive activity and the extent of the breakaway ( run up or run down ) movement .
85: This expectation is what really  sold '' point and figure .
86: But there is no positive and consistently demonstrable relationship in the strictest sense .
87: Experience will show that only the vaguest generalities apply , and in fine , these merely dwell upon a relationship between the durations and intensities of events .
88: After all , too much does not happen too suddenly , nor does very little take long .
89: The advantages and disadvantages of these two types of charting , bar charting and point and figure charting , remain the subject of fairly good-natured litigation among their respective professional advocates , with both methods enjoying in common , one irrevocable merit .
90: They are both trend-following methods .

91: Miami , Fla. , March 17 -- The Orioles tonight retained the distinction of being the only winless team among the eighteen Major-League clubs as they dropped their sixth straight spring exhibition decision , this one to the Kansas City Athletics by a score of 5 to 3 .
92: Indications as late as the top of the sixth were that the Birds were to end their victory draught as they coasted along with a 3-to-o advantage .
93: Siebern hits homer Over the first five frames , Jack Fisher , the big righthander who figures to be in the middle of Oriole plans for a drive on the 1961 American League pennant , held the A 's scoreless while yielding three scattered hits .
94: Then Dick Hyde , submarine-ball hurler , entered the contest and only five batters needed to face him before there existed a 3-to-3 deadlock .
95: A two-run homer by Norm Siebern and a solo blast by Bill Tuttle tied the game , and single runs in the eighth and ninth gave the Athletics their fifth victory in eight starts .
96: House throws wild With one down in the eighth , Marv Throneberry drew a walk and stole second as Hyde fanned Tuttle .
97: Catcher Frank House 's throw in an effort to nab Throneberry was wide and in the dirt .
98: Then Heywood Sullivan , Kansas City catcher , singled up the middle and Throneberry was across with what proved to be the winning run .
99: Rookie southpaw George Stepanovich relieved Hyde at the start of the ninth and gave up the A 's fifth tally on a walk to second baseman Dick Howser , a wild pitch , and Frank Cipriani 's single under Shortstop Jerry Adair 's glove into center .



## Authors#

This example notebook has been authored by Olivier Boulant and edited by Charles Truong.

## References#

[Choi2000] Choi, F. Y. Y. (2000). Advances in domain independent linear text segmentation. Proceedings of the North American Chapter of the Association for Computational Linguistics Conference (NAACL), 26–33.

[Truong2020] Truong, C., Oudre, L., & Vayatis, N. (2020). Selective review of offline change point detection methods. Signal Processing, 167.