14.3. The Dataset for Pretraining Word Embedding
In this section, we will introduce how to preprocess a dataset with negative sampling (Section 14.2) and load it into minibatches for word2vec training. The dataset we use is the Penn Tree Bank (PTB), a small but commonly used corpus sampled from Wall Street Journal articles. It includes training, validation, and test sets.
First, import the packages and modules required for the experiment.
%load ../utils/djl-imports
%load ../utils/plot-utils
%load ../utils/Functions.java
%load ../utils/PlotUtils.java
%load ../utils/StopWatch.java
%load ../utils/Accumulator.java
%load ../utils/Animator.java
%load ../utils/Training.java
%load ../utils/timemachine/Vocab.java
import java.util.stream.*;
import org.apache.commons.math3.distribution.EnumeratedDistribution;
NDManager manager = NDManager.newBaseManager();
14.3.1. Reading and Preprocessing the Dataset
This dataset has already been preprocessed. Each line of the dataset acts as a sentence. All the words in a sentence are separated by spaces. In the word embedding task, each word is a token.
public static String[][] readPTB() throws IOException {
String ptbURL = "http://d2l-data.s3-accelerate.amazonaws.com/ptb.zip";
InputStream input = new URL(ptbURL).openStream();
ZipUtils.unzip(input, Paths.get("./"));
ArrayList<String> lines = new ArrayList<>();
File file = new File("./ptb/ptb.train.txt");
Scanner myReader = new Scanner(file);
while (myReader.hasNextLine()) {
lines.add(myReader.nextLine());
}
String[][] tokens = new String[lines.size()][];
for (int i = 0; i < lines.size(); i++) {
tokens[i] = lines.get(i).trim().split(" ");
}
return tokens;
}
String[][] sentences = readPTB();
System.out.println("# sentences: " + sentences.length);
# sentences: 42068
Next we build a vocabulary, mapping words that appear fewer than 10 times to the “&lt;unk&gt;” token. Note that the preprocessed PTB data also contains “&lt;unk&gt;” tokens representing rare words.
Vocab vocab = new Vocab(sentences, 10, new String[] {});
System.out.println(vocab.length());
6719
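As a quick illustration (not part of the original notebook, and relying only on the getIdx and idxToToken members of the Vocab helper that are also used later in this section), we can map a token to its index and back:

int theIdx = vocab.getIdx("the");
System.out.println("index of \"the\": " + theIdx);
System.out.println("token at that index: " + vocab.idxToToken.get(theIdx));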
14.3.2. Subsampling
In text data, there are generally some words that appear at high frequencies, such as “the”, “a”, and “in” in English. Generally speaking, in a context window, it is better to train the word embedding model when a word (such as “chip”) and a lower-frequency word (such as “microprocessor”) appear at the same time, rather than when a word appears with a higher-frequency word (such as “the”). Therefore, when training the word embedding model, we can perform subsampling on the words [Mikolov et al., 2013b]. Specifically, each indexed word \(w_i\) in the dataset will be dropped with a certain probability. The dropout probability is given as:

\[P(w_i) = \max\left(1 - \sqrt{\frac{t}{f(w_i)}}, 0\right),\]
Here, \(f(w_i)\) is the ratio of the instances of word \(w_i\) to the total number of words in the dataset, and the constant \(t\) is a hyperparameter (set to \(10^{-4}\) in this experiment). As we can see, it is only possible to drop out the word \(w_i\) in subsampling when \(f(w_i) > t\). The higher the word’s frequency, the higher its dropout probability.
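For example, under this formula a word that accounts for \(1\%\) of all tokens (\(f(w_i) = 0.01\)) is dropped with probability \(1 - \sqrt{10^{-4}/0.01} = 0.9\), while any word with \(f(w_i) \le 10^{-4}\) is never dropped.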
public static boolean keep(String token, LinkedHashMap<?, Integer> counter, int numTokens) {
    // Keep this token with probability sqrt(t / f(token)), where t = 1e-4 and
    // f(token) = counter.get(token) / numTokens is the token's relative frequency
    return new Random().nextFloat() < Math.sqrt(1e-4 / counter.get(token) * numTokens);
}
public static String[][] subSampling(String[][] sentences, Vocab vocab) {
for (int i = 0; i < sentences.length; i++) {
for (int j = 0; j < sentences[i].length; j++) {
sentences[i][j] = vocab.idxToToken.get(vocab.getIdx(sentences[i][j]));
}
}
// Count the frequency for each word
LinkedHashMap<?, Integer> counter = vocab.countCorpus2D(sentences);
int numTokens = 0;
for (Integer value : counter.values()) {
numTokens += value;
}
// Now do the subsampling
String[][] output = new String[sentences.length][];
for (int i = 0; i < sentences.length; i++) {
ArrayList<String> tks = new ArrayList<>();
for (int j = 0; j < sentences[i].length; j++) {
String tk = sentences[i][j];
if (keep(sentences[i][j], counter, numTokens)) {
tks.add(tk);
}
}
output[i] = tks.toArray(new String[tks.size()]);
}
return output;
}
String[][] subsampled = subSampling(sentences, vocab);
Comparing the sequence lengths before and after subsampling, we can see that subsampling significantly reduces the sequence lengths.
double[] y1 = new double[sentences.length];
for (int i = 0; i < sentences.length; i++) y1[i] = sentences[i].length;
double[] y2 = new double[subsampled.length];
for (int i = 0; i < subsampled.length; i++) y2[i] = subsampled[i].length;
HistogramTrace trace1 =
HistogramTrace.builder(y1).opacity(.75).name("origin").nBinsX(20).build();
HistogramTrace trace2 =
HistogramTrace.builder(y2).opacity(.75).name("subsampled").nBinsX(20).build();
Layout layout =
Layout.builder()
.barMode(Layout.BarMode.GROUP)
.showLegend(true)
.xAxis(Axis.builder().title("# tokens per sentence").build())
.yAxis(Axis.builder().title("count").build())
.build();
new Figure(layout, trace1, trace2);
For individual tokens, the sampling rate of the high-frequency word “the” is less than 1/20.
public static String compareCounts(String token, String[][] sentences, String[][] subsampled) {
int beforeCount = 0;
for (int i = 0; i < sentences.length; i++) {
for (int j = 0; j < sentences[i].length; j++) {
if (sentences[i][j].equals(token)) beforeCount += 1;
}
}
int afterCount = 0;
for (int i = 0; i < subsampled.length; i++) {
for (int j = 0; j < subsampled[i].length; j++) {
if (subsampled[i][j].equals(token)) afterCount += 1;
}
}
return "# of \"the\": before=" + beforeCount + ", after=" + afterCount;
}
System.out.println(compareCounts("the", sentences, subsampled));
# of "the": before=50770, after=2084
But the low-frequency word “join” is completely preserved.
System.out.println(compareCounts("join", sentences, subsampled));
# of "the": before=45, after=45
Last, we map each token into an index to construct the corpus.
Integer[][] corpus = new Integer[subsampled.length][];
for (int i = 0; i < subsampled.length; i++) {
corpus[i] = vocab.getIdxs(subsampled[i]);
}
for (int i = 0; i < 3; i++) {
System.out.println(Arrays.toString(corpus[i]));
}
[]
[392, 32, 2115, 145, 274, 406, 2]
[140, 5277, 3054, 1580]
14.3.3. Loading the Dataset
Next we read the corpus with token indices into data batches for training.
14.3.3.1. Extracting Central Target Words and Context Words
We use words with a distance from the central target word not exceeding the context window size as the context words of the given central target word. The following function extracts all the central target words and their context words. It uniformly and randomly samples an integer between 1 and maxWindowSize (the maximum context window size) to use as the context window size.
public static Pair<ArrayList<Integer>, ArrayList<ArrayList<Integer>>> getCentersAndContext(
Integer[][] corpus, int maxWindowSize) {
ArrayList<Integer> centers = new ArrayList<>();
ArrayList<ArrayList<Integer>> contexts = new ArrayList<>();
for (Integer[] line : corpus) {
// Each sentence needs at least 2 words to form a "central target word
// - context word" pair
if (line.length < 2) {
continue;
}
centers.addAll(Arrays.asList(line));
for (int i = 0; i < line.length; i++) { // Context window centered at i
int windowSize = new Random().nextInt(maxWindowSize) + 1; // uniform in [1, maxWindowSize]
List<Integer> indices =
IntStream.range(
Math.max(0, i - windowSize),
Math.min(line.length, i + 1 + windowSize))
.boxed()
.collect(Collectors.toList());
// Exclude the central target word from the context words
indices.remove(indices.indexOf(i));
ArrayList<Integer> context = new ArrayList<>();
for (Integer idx : indices) {
context.add(line[idx]);
}
contexts.add(context);
}
}
return new Pair<>(centers, contexts);
}
Next, we create an artificial dataset containing two sentences of 7 and 3 words, respectively. Assume the maximum context window is 2 and print all the central target words and their context words.
Integer[][] tinyDataset =
new Integer[][] {
IntStream.range(0, 7)
.boxed()
.collect(Collectors.toList())
.toArray(new Integer[] {}),
IntStream.range(7, 10)
.boxed()
.collect(Collectors.toList())
.toArray(new Integer[] {})
};
System.out.println("dataset " + Arrays.deepToString(tinyDataset));
Pair<ArrayList<Integer>, ArrayList<ArrayList<Integer>>> centerContextPair =
getCentersAndContext(tinyDataset, 2);
for (int i = 0; i < centerContextPair.getValue().size(); i++) {
System.out.println(
"Center "
+ centerContextPair.getKey().get(i)
+ " has contexts"
+ centerContextPair.getValue().get(i));
}
dataset [[0, 1, 2, 3, 4, 5, 6], [7, 8, 9]]
Center 0 has contexts[1]
Center 1 has contexts[0, 2]
Center 2 has contexts[1, 3]
Center 3 has contexts[2, 4]
Center 4 has contexts[3, 5]
Center 5 has contexts[4, 6]
Center 6 has contexts[5]
Center 7 has contexts[8]
Center 8 has contexts[7, 9]
Center 9 has contexts[8]
We set the maximum context window size to 5. The following extracts all the central target words and their context words in the dataset.
centerContextPair = getCentersAndContext(corpus, 5);
ArrayList<Integer> allCenters = centerContextPair.getKey();
ArrayList<ArrayList<Integer>> allContexts = centerContextPair.getValue();
System.out.println("# center-context pairs:" + allCenters.size());
# center-context pairs:353293
14.3.3.2. Negative Sampling
We use negative sampling for approximate training. For each pair of central target word and context word, we randomly sample \(K\) noise words (\(K=5\) in this experiment). According to the suggestion in the word2vec paper, the noise word sampling probability \(P(w)\) is proportional to the word frequency of \(w\) raised to the power of 0.75 [Mikolov et al., 2013b].
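That is, if \(\mathrm{count}(w)\) denotes the number of occurrences of \(w\) in the (subsampled) corpus, each noise word is drawn with probability

\[P(w) = \frac{\mathrm{count}(w)^{0.75}}{\sum_{w'} \mathrm{count}(w')^{0.75}};\]

the unnormalized weights \(\mathrm{count}(w)^{0.75}\) are the samplingWeights computed in getNegatives below.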
We first define a class to draw a candidate according to the sampling weights. To reduce the overhead of repeated sampling, it caches a bank of 10,000 random draws at a time.
public class RandomGenerator {
/* Draw a random int in [0, n-1] according to n sampling weights. */
private List<Integer> population;
private List<Double> samplingWeights;
private List<Integer> candidates;
private List<org.apache.commons.math3.util.Pair<Integer, Double>> pmf;
private int i;
public RandomGenerator(List<Double> samplingWeights) {
this.population =
IntStream.range(0, samplingWeights.size()).boxed().collect(Collectors.toList());
this.samplingWeights = samplingWeights;
this.candidates = new ArrayList<>();
this.i = 0;
this.pmf = new ArrayList<>();
for (int i = 0; i < samplingWeights.size(); i++) {
this.pmf.add(new org.apache.commons.math3.util.Pair(this.population.get(i), this.samplingWeights.get(i).doubleValue()));
}
}
public Integer draw() {
if (this.i == this.candidates.size()) {
this.candidates =
Arrays.asList((Integer[]) new EnumeratedDistribution(this.pmf).sample(10000, new Integer[] {}));
this.i = 0;
}
this.i += 1;
return this.candidates.get(this.i - 1);
}
}
RandomGenerator generator =
new RandomGenerator(Arrays.asList(new Double[] {2.0, 3.0, 4.0}));
Integer[] generatorOutput = new Integer[10];
for (int i = 0; i < 10; i++) {
generatorOutput[i] = generator.draw();
}
System.out.println(Arrays.toString(generatorOutput));
[2, 2, 0, 2, 2, 2, 1, 2, 1, 2]
public static ArrayList<ArrayList<Integer>> getNegatives(
ArrayList<ArrayList<Integer>> allContexts, Integer[][] corpus, int K) {
LinkedHashMap<?, Integer> counter = Vocab.countCorpus2D(corpus);
ArrayList<Double> samplingWeights = new ArrayList<>();
for (Map.Entry<?, Integer> entry : counter.entrySet()) {
samplingWeights.add(Math.pow(entry.getValue(), .75));
}
ArrayList<ArrayList<Integer>> allNegatives = new ArrayList<>();
RandomGenerator generator = new RandomGenerator(samplingWeights);
for (ArrayList<Integer> contexts : allContexts) {
ArrayList<Integer> negatives = new ArrayList<>();
while (negatives.size() < contexts.size() * K) {
Integer neg = generator.draw();
// Noise words cannot be context words
if (!contexts.contains(neg)) {
negatives.add(neg);
}
}
allNegatives.add(negatives);
}
return allNegatives;
}
ArrayList<ArrayList<Integer>> allNegatives = getNegatives(allContexts, corpus, 5);
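As a quick sanity check (not part of the original notebook), each center should have received exactly \(K = 5\) noise words per context word:

System.out.println("# contexts of the first center: " + allContexts.get(0).size());
System.out.println("# negatives of the first center: " + allNegatives.get(0).size());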
14.3.3.3. Reading into Batches
We extract all the central target words allCenters, and the context words allContexts and noise words allNegatives of each central target word, from the dataset. We will read them in random minibatches.

In a minibatch of data, the \(i^\mathrm{th}\) example includes a central word and its corresponding \(n_i\) context words and \(m_i\) noise words. Since the context window size of each example may be different, the sum of context words and noise words, \(n_i+m_i\), will be different. When constructing a minibatch, we concatenate the context words and noise words of each example, and add 0s for padding until the length of all the concatenations is the same, that is, \(\max_i n_i+m_i\) (maxLen). In order to avoid the effect of padding on the loss function calculation, we construct the mask variable masks, each element of which corresponds to an element in the concatenation of context and noise words, contextsNegatives. When an element in the variable contextsNegatives is padding, the element in the mask variable masks at the same position will be 0; otherwise, it takes the value 1. In order to distinguish between positive and negative examples, we also need to distinguish the context words from the noise words in the contextsNegatives variable. Based on the construction of the mask variable, we only need to create a label variable labels with the same shape as contextsNegatives and set the elements corresponding to context words (positive examples) to 1, and the rest to 0.

Next, we will implement the minibatch reading function batchifyData. Its input data is an array of NDLists, each of which contains a central target word center, its context words context, and its noise words negative. The minibatch data returned by this function conforms to the format we need; for example, it includes the mask variable.
public static NDList batchifyData(NDList[] data) {
NDList centers = new NDList();
NDList contextsNegatives = new NDList();
NDList masks = new NDList();
NDList labels = new NDList();
long maxLen = 0;
for (NDList ndList : data) { // center, context, negative = ndList
maxLen =
Math.max(
maxLen,
ndList.get(1).countNonzero().getLong()
+ ndList.get(2).countNonzero().getLong());
}
for (NDList ndList : data) { // center, context, negative = ndList
NDArray center = ndList.get(0);
NDArray context = ndList.get(1);
NDArray negative = ndList.get(2);
int count = 0;
for (int i = 0; i < context.size(); i++) {
// If a 0 is found, we want to stop adding these
// values to NDArray
if (context.get(i).getInt() == 0) {
break;
}
contextsNegatives.add(context.get(i).reshape(1));
masks.add(manager.create(1).reshape(1));
labels.add(manager.create(1).reshape(1));
count += 1;
}
for (int i = 0; i < negative.size(); i++) {
// If a 0 is found, we want to stop adding these
// values to NDArray
if (negative.get(i).getInt() == 0) {
break;
}
contextsNegatives.add(negative.get(i).reshape(1));
masks.add(manager.create(1).reshape(1));
labels.add(manager.create(0).reshape(1));
count += 1;
}
// Pad the remaining positions with zeros so every row has length maxLen
while (count != maxLen) {
contextsNegatives.add(manager.create(0).reshape(1));
masks.add(manager.create(0).reshape(1));
labels.add(manager.create(0).reshape(1));
count += 1;
}
// Add this example's center word to the output
centers.add(center.reshape(1));
}
return new NDList(
NDArrays.concat(centers).reshape(data.length, -1),
NDArrays.concat(contextsNegatives).reshape(data.length, -1),
NDArrays.concat(masks).reshape(data.length, -1),
NDArrays.concat(labels).reshape(data.length, -1));
}
Construct two simple examples:
NDList x1 =
new NDList(
manager.create(new int[] {1}),
manager.create(new int[] {2, 2}),
manager.create(new int[] {3, 3, 3, 3}));
NDList x2 =
new NDList(
manager.create(new int[] {1}),
manager.create(new int[] {2, 2, 2}),
manager.create(new int[] {3, 3}));
NDList batchedData = batchifyData(new NDList[] {x1, x2});
String[] names = new String[] {"centers", "contexts_negatives", "masks", "labels"};
for (int i = 0; i < batchedData.size(); i++) {
System.out.println(names[i] + " shape: " + batchedData.get(i));
}
centers shape: ND: (2, 1) gpu(0) int32
[[ 1],
[ 1],
]
contexts_negatives shape: ND: (2, 6) gpu(0) int32
[[ 2, 2, 3, 3, 3, 3],
[ 2, 2, 2, 3, 3, 0],
]
masks shape: ND: (2, 6) gpu(0) int32
[[ 1, 1, 1, 1, 1, 1],
[ 1, 1, 1, 1, 1, 0],
]
labels shape: ND: (2, 6) gpu(0) int32
[[ 1, 1, 0, 0, 0, 0],
[ 1, 1, 1, 0, 0, 0],
]
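To illustrate why the masks matter, here is a minimal sketch (not part of the original notebook, using made-up per-element loss values): multiplying an element-wise loss by the mask zeroes out the padded positions, and dividing by the mask sum averages only over the real entries.

NDArray mask = manager.create(new float[][] {{1, 1, 1, 1, 1, 1}, {1, 1, 1, 1, 1, 0}});
NDArray elementLoss = manager.create(new float[][] {
        {0.5f, 0.2f, 0.4f, 0.1f, 0.3f, 0.6f},
        {0.5f, 0.2f, 0.4f, 0.1f, 0.3f, 0.9f}}); // the 0.9 sits at a padded position
// Zero out padded positions, then average over the non-padding elements of each example
NDArray perExampleLoss = elementLoss.mul(mask).sum(new int[] {1}).div(mask.sum(new int[] {1}));
System.out.println(perExampleLoss);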
We use the batchifyData function just defined to specify the minibatch reading method for the ArrayDataset instance iterator.
14.3.4. Putting All Things Together
Last, we define the loadDataPTB function that reads the PTB dataset and returns it as a dataset. In addition, we create a function called convertNDArray that converts the centers, contexts, and negatives lists into NDArrays, padding with 0s where there is no data so that all rows have the same length.
public static NDList convertNDArray(Object[] data, NDManager manager) {
ArrayList<Integer> centers = (ArrayList<Integer>) data[0];
ArrayList<ArrayList<Integer>> contexts = (ArrayList<ArrayList<Integer>>) data[1];
ArrayList<ArrayList<Integer>> negatives = (ArrayList<ArrayList<Integer>>) data[2];
// Create centers NDArray
NDArray centersNDArray = manager.create(centers.stream().mapToInt(i -> i).toArray());
// Create contexts NDArray
int maxLen = 0;
for (ArrayList<Integer> context : contexts) {
maxLen = Math.max(maxLen, context.size());
}
// Fill arrays with 0s to all have same lengths and be able to create NDArray
for (ArrayList<Integer> context : contexts) {
while (context.size() != maxLen) {
context.add(0);
}
}
NDArray contextsNDArray =
manager.create(
contexts.stream()
.map(u -> u.stream().mapToInt(i -> i).toArray())
.toArray(int[][]::new));
// Create negatives NDArray
maxLen = 0;
for (ArrayList<Integer> negative : negatives) {
maxLen = Math.max(maxLen, negative.size());
}
// Fill arrays with 0s to all have same lengths and be able to create NDArray
for (ArrayList<Integer> negative : negatives) {
while (negative.size() != maxLen) {
negative.add(0);
}
}
NDArray negativesNDArray =
manager.create(
negatives.stream()
.map(u -> u.stream().mapToInt(i -> i).toArray())
.toArray(int[][]::new));
return new NDList(centersNDArray, contextsNDArray, negativesNDArray);
}
public static Pair<ArrayDataset, Vocab> loadDataPTB(
int batchSize, int maxWindowSize, int numNoiseWords, NDManager manager)
throws IOException, TranslateException {
String[][] sentences = readPTB();
Vocab vocab = new Vocab(sentences, 10, new String[] {});
String[][] subSampled = subSampling(sentences, vocab);
Integer[][] corpus = new Integer[subSampled.length][];
for (int i = 0; i < subSampled.length; i++) {
corpus[i] = vocab.getIdxs(subSampled[i]);
}
Pair<ArrayList<Integer>, ArrayList<ArrayList<Integer>>> pair =
getCentersAndContext(corpus, maxWindowSize);
ArrayList<ArrayList<Integer>> negatives =
getNegatives(pair.getValue(), corpus, numNoiseWords);
NDList ndArrays =
convertNDArray(new Object[] {pair.getKey(), pair.getValue(), negatives}, manager);
ArrayDataset dataset =
new ArrayDataset.Builder()
.setData(ndArrays.get(0), ndArrays.get(1), ndArrays.get(2))
.optDataBatchifier(
new Batchifier() {
@Override
public NDList batchify(NDList[] ndLists) {
return batchifyData(ndLists);
}
@Override
public NDList[] unbatchify(NDList ndList) {
return new NDList[0];
}
})
.setSampling(batchSize, true)
.build();
return new Pair<>(dataset, vocab);
}
Let us print the shape of each variable in the first minibatch of the dataset.
Pair<ArrayDataset, Vocab> datasetVocab = loadDataPTB(512, 5, 5, manager);
ArrayDataset dataset = datasetVocab.getKey();
vocab = datasetVocab.getValue();
Batch batch = dataset.getData(manager).iterator().next();
for (int i = 0; i < batch.getData().size(); i++) {
System.out.println(names[i] + " shape: " + batch.getData().get(i).getShape());
}
centers shape: (512, 1)
contexts_negatives shape: (512, 48)
masks shape: (512, 48)
labels shape: (512, 48)
14.3.5. Summary
Subsampling attempts to minimize the impact of high-frequency words on the training of a word embedding model.
We can pad examples of different lengths to create minibatches with examples of all the same length and use mask variables to distinguish between padding and non-padding elements, so that only non-padding elements participate in the calculation of the loss function.
14.3.6. Exercises
We use the batchifyData function to specify the minibatch reading method for the ArrayDataset instance iterator and print the shape of each variable in the first batch read. How should these shapes be calculated?