4 changes: 2 additions & 2 deletions navi/thrift_bpr_adapter/thrift/src/decoder.rs
@@ -6,9 +6,9 @@ enum FeatureVal {
FloatVector(Vec<f32>),
}

- // A Feture has a name and a value
+ // A Feature has a name and a value
// The name for now is 'id' of type string
- // Eventually this needs to be flexible - example to accomodate feature-id
+ // Eventually this needs to be flexible - example to accommodate feature-id
struct Feature {
id: String,
val: FeatureVal,
@@ -175,7 +175,7 @@ protected ConsumerRecords<K, V> poll() {

protected abstract void validateAndIndexRecord(ConsumerRecord<K, V> record);

- // Shutdown hook which can be called from a seperate thread. Calling consumer.wakeup() interrupts
+ // Shutdown hook which can be called from a separate thread. Calling consumer.wakeup() interrupts
// the running indexer and causes it to first stop polling for new records before gracefully
// closing the consumer.
public void close() {
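The comment above describes the graceful-shutdown protocol: a separate thread calls close(), consumer.wakeup() breaks the indexer out of a blocking poll(), and the consumer is then closed cleanly. A rough Python sketch of the same idea follows; the Python Kafka clients have no wakeup(), so a threading.Event plus a short poll timeout stands in for it, and every class and method name here is illustrative rather than the actual indexer code.

```python
import threading

class PollingIndexer:
    """Illustrative stand-in for the indexer loop described in the comment above."""

    def __init__(self, consumer):
        self.consumer = consumer        # anything with poll(timeout) and close()
        self._stop = threading.Event()  # plays the role consumer.wakeup() plays in the Java client

    def run(self):
        try:
            while not self._stop.is_set():
                records = self.consumer.poll(timeout=1.0)  # short timeout so close() takes effect promptly
                for record in records or []:
                    self.validate_and_index(record)
        finally:
            self.consumer.close()  # always release the consumer, even when shutting down

    def validate_and_index(self, record):
        raise NotImplementedError  # mirrors the abstract validateAndIndexRecord above

    def close(self):
        # Safe to call from a separate thread: it only sets a flag; the polling
        # loop notices it, stops fetching new records, and closes the consumer.
        self._stop.set()
```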
@@ -31,7 +31,7 @@
* opposite value of isPositive of the parent group.
*
* I'll try to break it down a bit further. Let's assume "a" and "b" are hf terms, and '
* "[hf_term_pair a b]" represents querying their co-occurence.
* "[hf_term_pair a b]" represents querying their co-occurrence.
* Query (* a b not_hf) can become (* [hf_term_pair a b] not_hf)
* Query (+ -a -b -not_hf) can become (+ -[hf_term_pair a b] -not_hf)
* These two rules represent the bulk of the rewrites that this class makes.
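A small Python sketch of the first rewrite rule quoted in the comment above, treating queries as nested tuples; HF_TERMS and the hf_term_pair node are illustrative stand-ins, not the actual rewriter class shown here.

```python
HF_TERMS = {"a", "b"}  # illustrative set of high-frequency ("hf") terms

def rewrite_conjunction(query, hf_terms=HF_TERMS):
    """Rewrite (* a b rest...) into (* [hf_term_pair a b] rest...) when a and b are hf terms."""
    op, *children = query
    hf = [c for c in children if c in hf_terms]
    non_hf = [c for c in children if c not in hf_terms]
    if op == "*" and len(hf) == 2:
        return (op, ("hf_term_pair", hf[0], hf[1]), *non_hf)
    return query

# The conjunction rule from the comment above:
assert rewrite_conjunction(("*", "a", "b", "not_hf")) == ("*", ("hf_term_pair", "a", "b"), "not_hf")
```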
@@ -7,7 +7,7 @@ import ch.qos.logback.core.spi.FilterReply
import com.twitter.tweetypie.serverutil.ExceptionCounter.isAlertable

/**
- * This class is currently being used by logback to log alertable exceptions to a seperate file.
+ * This class is currently being used by logback to log alertable exceptions to a separate file.
*
* Filters do not change the log levels of individual loggers. Filters filter out specific messages
* for specific appenders. This allows us to have a log file with lots of information you will
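The design point in the comment above, filters attached to individual appenders rather than changed logger levels, can be illustrated with Python's logging module, which has analogous handler-level filters. This is only an analogy with made-up names (AlertableFilter, the alertable flag), not the Tweetypie logback configuration.

```python
import logging

class AlertableFilter(logging.Filter):
    """Only pass records flagged as alertable; the 'alertable' attribute is made up for illustration."""
    def filter(self, record):
        return getattr(record, "alertable", False)

logger = logging.getLogger("tweetypie")
logger.setLevel(logging.INFO)               # the logger's own level is left alone

alert_handler = logging.FileHandler("alertable.log")
alert_handler.addFilter(AlertableFilter())  # the filter is attached to one handler (appender), not the logger
logger.addHandler(alert_handler)

logger.error("storage timeout", extra={"alertable": True})  # reaches alertable.log
logger.error("expected validation failure")                 # filtered out of alertable.log
```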
@@ -27,7 +27,7 @@ case class TweetCacheWrite(
* If the tweet id is a snowflake id, this is an offset since tweet creation.
* If it is not a snowflake id, then this is a Unix epoch time in
* milliseconds. (The idea is that for most tweets, this encoding will make
- * it easier to see the interval between events and whether it occured soon
+ * it easier to see the interval between events and whether it occurred soon
* after tweet creation.)
* - Cache action ("set", "add", "replace", "cas", "delete")
* - Base64-encoded Cached[CachedTweet] struct
2 changes: 1 addition & 1 deletion tweetypie/server/src/main/thrift/tweetypie_internal.thrift
@@ -116,7 +116,7 @@ struct TweetCacheWrite {
// If the tweet id is a snowflake id, this is an offset since tweet creation.
// If it is not a snowflake id, then this is a Unix epoch time in
// milliseconds. (The idea is that for most tweets, this encoding will make
- // it easier to see the interval between events and whether it occured soon
+ // it easier to see the interval between events and whether it occurred soon
// acter tweet creation.)
2: required i64 timestamp (personalDataType = 'TransactionTimestamp')
3: required string action // One of "set", "add", "replace", "cas", "delete"
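Both hunks above describe the same timestamp encoding. A worked Python sketch of it, assuming the standard Twitter snowflake layout (millisecond creation time in the high bits, shifted left by 22, relative to the 1288834974657 epoch); the helper names are illustrative.

```python
SNOWFLAKE_EPOCH_MS = 1288834974657  # Twitter snowflake epoch, 2010-11-04T01:42:54.657Z

def snowflake_creation_ms(tweet_id):
    """Creation time encoded in a snowflake id: the high bits are a millisecond offset from the epoch."""
    return (tweet_id >> 22) + SNOWFLAKE_EPOCH_MS

def cache_write_timestamp(tweet_id, event_time_ms, is_snowflake):
    if is_snowflake:
        # Offset since tweet creation: makes the interval between events, and whether
        # they happened soon after creation, easy to read off the log.
        return event_time_ms - snowflake_creation_ms(tweet_id)
    # Pre-snowflake ids: fall back to the raw Unix epoch time in milliseconds.
    return event_time_ms
```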
4 changes: 2 additions & 2 deletions twml/twml/argument_parser.py
@@ -411,15 +411,15 @@ def get_trainer_parser():
action=parse_comma_separated_list(element_type=float),
default=None,
help="Required for 'piecewise_constant_values' learning_rate_decay. "
"A list of comma seperated floats or ints that specifies the values "
"A list of comma separated floats or ints that specifies the values "
"for the intervals defined by boundaries. It should have one more "
"element than boundaries.")
parser_piecewise_constant.add_argument(
"--piecewise_constant_boundaries",
action=parse_comma_separated_list(element_type=int),
default=None,
help="Required for 'piecewise_constant_values' learning_rate_decay. "
"A list of comma seperated integers, with strictly increasing entries.")
"A list of comma separated integers, with strictly increasing entries.")

# Create the parser for the "inverse_learning_rate_decay_fn"
parser_inverse = subparsers.add_parser('inverse_learning_rate_decay',
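As the two help strings above say, --piecewise_constant_values needs exactly one more element than --piecewise_constant_boundaries, and the boundaries must be strictly increasing. A plain-Python sketch of how such a schedule is typically evaluated (not twml's actual implementation; exact behavior at a boundary may differ):

```python
def piecewise_constant(step, boundaries, values):
    """Return the learning rate for `step` given strictly increasing boundaries."""
    assert len(values) == len(boundaries) + 1, "values needs one more element than boundaries"
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[-1]

# e.g. --piecewise_constant_boundaries=10000,20000
#      --piecewise_constant_values=0.1,0.05,0.01
boundaries = [10000, 20000]
values = [0.1, 0.05, 0.01]
assert piecewise_constant(500, boundaries, values) == 0.1     # before the first boundary
assert piecewise_constant(15000, boundaries, values) == 0.05  # between the boundaries
assert piecewise_constant(30000, boundaries, values) == 0.01  # after the last boundary
```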
2 changes: 1 addition & 1 deletion twml/twml/contrib/layers/hashed_percentile_discretizer.py
@@ -34,7 +34,7 @@ class HashedPercentileDiscretizer(Layer):
Note that if an input feature is rarely used, so will its associated output bin/features.
The difference between this layer and PercentileDiscretizer is that the
DeterministicPercentileDiscretize always assigns the same output id in the SparseTensor to the
- same input feature id + bin. This is useful if you want to user transfer learning on pre-trained
+ same input feature id + bin. This is useful if you want to use transfer learning on pre-trained
sparse to dense embedding layers, but re-calibrate your discretizer on newer data.
"""

2 changes: 1 addition & 1 deletion twml/twml/contrib/layers/hashing_discretizer.py
@@ -26,7 +26,7 @@ class HashingDiscretizer(Layer):
The difference between this layer and PercentileDiscretizer is that the
HashingDiscretizer always assigns the same output id in the
SparseTensor to the same input (feature id, bin) pair. This is useful if you
- want to user transfer learning on pre-trained sparse to dense embedding
+ want to use transfer learning on pre-trained sparse to dense embedding
layers, but re-calibrate your discretizer on newer data.

If there are no calibrated features, then the discretizer will only apply
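Both discretizer docstrings above describe the same property: a given (feature id, bin) pair always maps to the same output id, so pre-trained sparse-to-dense embeddings keyed by those ids can be reused after recalibrating the bin boundaries on newer data. A hedged Python sketch of that idea; the hash choice and bin edges are illustrative, not twml's implementation.

```python
import bisect
import hashlib

def discretize(feature_id, value, bin_edges, output_size):
    """Deterministically map (feature_id, bin) to an output id in [0, output_size)."""
    bin_idx = bisect.bisect_left(bin_edges, value)           # which calibrated bin the value falls in
    key = f"{feature_id}:{bin_idx}".encode()
    digest = hashlib.md5(key).digest()                        # stable across runs, unlike Python's hash()
    return int.from_bytes(digest[:8], "big") % output_size   # same (feature_id, bin) -> same output id

edges = [0.1, 0.5, 0.9]  # illustrative percentile boundaries from calibration
assert discretize(42, 0.3, edges, 1 << 20) == discretize(42, 0.2, edges, 1 << 20)  # same bin, same id
```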