Merged
11 changes: 9 additions & 2 deletions .codespellrc
@@ -7,6 +7,10 @@ skip = ./build,
*.log,
*.vqm,
*.blif,
*.xml,
*.pm,
# Special case: Pearl scripts are not being maintained.
*.pl,
# External projects that do not belong to us.
./libs/EXTERNAL,
./parmys,
@@ -16,8 +20,11 @@ skip = ./build,
./ace2,
./blifexplorer,
./verilog_preprocessor,
# WIP spelling cleanups.
./vtr_flow,
./vtr_flow/scripts/perl_libs,
./vtr_flow/scripts/benchtracker,
# Large testing directories.
./vtr_flow/benchmarks,
./vtr_flow/tasks,
# Temporary as we wait for some PRs to merge.
*_graph_uxsdcxx_capnp.h,
./vpr/src/route/rr_graph_generation/rr_graph.cpp,
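As an aside for readers who have not used codespell: each comma-separated `skip` entry is a glob or path that excludes matching files from the spell check, which is why whole directories (external code, WIP cleanups, large benchmark trees) sit alongside extension patterns like `*.pl` and `*.pm`. A rough Python sketch of that matching, assuming fnmatch-style globs (codespell's exact rules may differ in detail):

```python
from fnmatch import fnmatch

# Illustrative subset of the skip entries added above, not the full list.
SKIP_GLOBS = ["*.xml", "*.pm", "*.pl"]
SKIP_DIRS = ["./vtr_flow/scripts/perl_libs", "./vtr_flow/benchmarks", "./vtr_flow/tasks"]

def is_skipped(path):
    # Skip a file if it matches an extension glob or lives under a skipped directory.
    if any(fnmatch(path, pattern) for pattern in SKIP_GLOBS):
        return True
    return any(path == d or path.startswith(d + "/") for d in SKIP_DIRS)

print(is_skipped("./vtr_flow/scripts/perl_libs/List/MoreUtils.pm"))  # True
print(is_skipped("./vpr/src/place/place.cpp"))                       # False
```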
2 changes: 1 addition & 1 deletion vtr_flow/arch/titan/README.rst
@@ -103,7 +103,7 @@ Adding Support for New Architectures
Support can be added for additional Quartus II supported FPGA architectures
(Cyclone III, Stratix II etc), by defining models for the architecture's VQM
primitives. Good places to look for this information include:
* Altera's Quartus Univeristy Interface Program (QUIP) documentation
* Altera's Quartus University Interface Program (QUIP) documentation
* The 'fv_lib' directory under a Quartus installation

For more details see vqm_to_blif's README.txt
2 changes: 1 addition & 1 deletion vtr_flow/arch/zeroasic/README.md
@@ -2,7 +2,7 @@

These are the VTR captures of the Zero ASIC architectures.

The orginal Zero ASIC architectures can be found in logiklib here:
The original Zero ASIC architectures can be found in logiklib here:
https://github.com/siliconcompiler/logiklib

These architectures have been slightly modified to work with VTR's CAD flow
Original file line number Diff line number Diff line change
@@ -11,6 +11,6 @@ crit_path_route_time;RangeAbs(0.10,10.0,2)

#Peak memory
#We set a 100MiB minimum threshold since the memory
#alloctor (e.g. TBB vs glibc) can cause a difference
#allocator (e.g. TBB vs glibc) can cause a difference
#particularly on small benchmarks
max_vpr_mem;RangeAbs(0.8,1.35,102400)
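For context, these pass-requirement entries use `RangeAbs(min_ratio, max_ratio, abs_threshold)` checks against golden results; the 100 MiB floor mentioned in the comment is the absolute threshold (102400 KiB). The sketch below is one plausible reading of that check, not the authoritative implementation, which lives in VTR's parse/QoR comparison scripts:

```python
def range_abs_passes(measured, golden, min_ratio, max_ratio, abs_threshold):
    # Assumed semantics: pass if the absolute difference is below the threshold
    # (so allocator/OS noise on tiny benchmarks cannot fail the run), otherwise
    # require the measured/golden ratio to fall inside the relative range.
    if abs(measured - golden) <= abs_threshold:
        return True
    if golden == 0:
        return False
    return min_ratio <= measured / golden <= max_ratio

# max_vpr_mem;RangeAbs(0.8,1.35,102400) with memory reported in KiB:
print(range_abs_passes(90_000, 50_000, 0.8, 1.35, 102_400))    # True: inside the 100 MiB floor
print(range_abs_passes(900_000, 500_000, 0.8, 1.35, 102_400))  # False: 1.8x and past the floor
```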
Original file line number Diff line number Diff line change
@@ -11,6 +11,6 @@ crit_path_route_time;RangeAbs(0.10,10.0,2)

#Peak memory
#We set a 100MiB minimum threshold since the memory
#alloctor (e.g. TBB vs glibc) can cause a difference
#allocator (e.g. TBB vs glibc) can cause a difference
#particularly on small benchmarks
max_vpr_mem;RangeAbs(0.8,1.203,102400)
Original file line number Diff line number Diff line change
@@ -15,13 +15,13 @@ min_chan_width_route_time;RangeAbs(0.10,15.0,3)

#Peak memory
#We set a 100MiB minimum threshold since the memory
#alloctor (e.g. TBB vs glibc) can cause a difference
#allocator (e.g. TBB vs glibc) can cause a difference
#particularly on small benchmarks
#
#Note that due to different binary search path, peak memory
#can differ significantly during binary search (e.g. a larger
#or smaller channel width explored during the search can
#significantly affect the size of the RR graph, and correspondingly
#peak mememory usage in VPR. As a result we just a larger permissible
#peak memory usage in VPR. As a result we just a larger permissible
#range for peak memory usage.
max_vpr_mem;RangeAbs(0.5,2.0,102400)
Original file line number Diff line number Diff line change
@@ -16,13 +16,13 @@ min_chan_width_route_time;RangeAbs(0.05,15.0,4)

#Peak memory
#We set a 100MiB minimum threshold since the memory
#alloctor (e.g. TBB vs glibc) can cause a difference
#allocator (e.g. TBB vs glibc) can cause a difference
#particularly on small benchmarks
#
#Note that due to different binary search path, peak memory
#can differ significantly during binary search (e.g. a larger
#or smaller channel width explored during the search can
#significantly affect the size of the RR graph, and correspondingly
#peak mememory usage in VPR. As a result we just a larger permissible
#peak memory usage in VPR. As a result we just a larger permissible
#range for peak memory usage.
max_vpr_mem;RangeAbs(0.5,2.0,102400)
Original file line number Diff line number Diff line change
@@ -1 +1 @@
#VPR metrix at relaxed (relative to minimum) channel width
#VPR metrics at relaxed (relative to minimum) channel width
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
#VPR metrix at relaxed (relative to minimum) channel width with timing
#VPR metrics at relaxed (relative to minimum) channel width with timing
%include "../common/pass_requirements.vpr_route_relaxed_chan_width.txt"

#Routing Metrics
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
#VPR metrix at relaxed (relative to minimum) channel width with timing
#VPR metrics at relaxed (relative to minimum) channel width with timing
%include "../common/pass_requirements.vpr_route_relaxed_chan_width_small.txt"

#Routing Metrics
10 changes: 5 additions & 5 deletions vtr_flow/primitives.lib
@@ -84,7 +84,7 @@ library (VTRPrimitives) {
*
* INPUTS:
* datain
* OUPUTS:
* OUTPUTS:
Contributor:

I'm not familiar with this .lib file so I'm not sure if it's safe to change this. Is the functionality here tested to make sure it's okay?

Contributor Author:

This is a Liberty file. They are a very strange "standard" and they are not often written by hand this way. They allow C++ style block-comments. This is within a block-comment. So this will not make a functional change.

* dataout
*/
cell (fpga_interconnect) {
@@ -125,7 +125,7 @@ library (VTRPrimitives) {
* The LUT mask that defines the output of the LUT as a function
* of the input. mask[0] is the output if all the inputs are 0, and
* mask[2^k - 1] is the output if all the inputs are 1.
* OUPUTS:
* OUTPUTS:
* out
*/
cell (LUT_4) {
@@ -171,7 +171,7 @@ library (VTRPrimitives) {
* The LUT mask that defines the output of the LUT as a function
* of the input. mask[0] is the output if all the inputs are 0, and
* mask[2^k - 1] is the output if all the inputs are 1.
* OUPUTS:
* OUTPUTS:
* out
*/
cell (LUT_5) {
@@ -217,7 +217,7 @@ library (VTRPrimitives) {
* The LUT mask that defines the output of the LUT as a function
* of the input. mask[0] is the output if all the inputs are 0, and
* mask[2^k - 1] is the output if all the inputs are 1.
* OUPUTS:
* OUTPUTS:
* out
*/
cell (LUT_6) {
@@ -262,7 +262,7 @@ library (VTRPrimitives) {
* edge.
* clock:
* The clock signal for the DFF.
* OUPUTS:
* OUTPUTS:
* Q:
* The current value stored in the latch.
* QN:
12 changes: 6 additions & 6 deletions vtr_flow/primitives.v
@@ -7,13 +7,13 @@
//If you wish to do back-annotated timing simulation you will need
//to link with this file during simulation.
//
//To ensure currect result when performing back-annoatation with
//To ensure correct result when performing back-annoatation with
//Modelsim see the notes at the end of this comment.
//
//Specifying Timing Edges
//=======================
//To perform timing back-annotation the simulator must know the delay
//dependancies (timing edges) between the ports on each primitive.
//dependencies (timing edges) between the ports on each primitive.
//
//During back-annotation the simulator will attempt to annotate SDF delay
//values onto the timing edges. It should give a warning if was unable
@@ -33,7 +33,7 @@
// (in[1] => out[1]) = "";
// endspecify
//
//This states that there are the following timing edges (dependancies):
//This states that there are the following timing edges (dependencies):
// * from in[0] to out[0]
// * from in[1] to out[1]
//
@@ -62,7 +62,7 @@
// (in *> out) = "";
// endspecify
//
//states that there are the following timing edges (dependancies):
//states that there are the following timing edges (dependencies):
// * from in[0] to out[0]
// * from in[0] to out[1]
// * from in[0] to out[2]
@@ -91,11 +91,11 @@
//This forces it to apply specify statements using multi-bit operands to
//each bit of the operand (i.e. according to the Verilog standard).
//
//Confirming back-annotation is occuring correctly
//Confirming back-annotation is occurring correctly
//------------------------------------------------
//
//Another useful option is '+sdf_verbose' which produces extra output about
//SDF annotation, which can be used to verify annotation occured correctly.
//SDF annotation, which can be used to verify annotation occurred correctly.
//
//For example:
//
4 changes: 2 additions & 2 deletions vtr_flow/scripts/download_ispd.py
@@ -59,7 +59,7 @@ def parse_args():
"--force",
default=False,
action="store_true",
help="Run extraction step even if directores etc. already exist",
help="Run extraction step even if directories etc. already exist",
)

parser.add_argument(
@@ -114,7 +114,7 @@ def main():
print("File corrupt:", e)
sys.exit(2)
except ExtractionError as e:
print("Failed to extrac :", e)
print("Failed to extract :", e)
sys.exit(3)

sys.exit(0)
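The download scripts share the pattern touched here: extraction is skipped when the output directories already exist unless `--force` is given, and any failure during extraction surfaces as an `ExtractionError` with exit code 3. A hedged sketch of that control flow (the archive and directory names are illustrative, not the scripts' actual values):

```python
import os
import sys
import tarfile

class ExtractionError(Exception):
    """Raised when extracting the downloaded file fails."""

def extract_if_needed(archive_path, dest_dir, force=False):
    # Skip the slow extraction step if the output already exists, unless forced.
    if os.path.isdir(dest_dir) and not force:
        print(f"{dest_dir} already exists, skipping extraction (pass --force to redo it)")
        return
    try:
        with tarfile.open(archive_path) as tar:
            tar.extractall(dest_dir)
    except (tarfile.TarError, OSError) as e:
        raise ExtractionError(str(e)) from e

# Mirrors the error handling shown in main() above:
try:
    extract_if_needed("ispd_benchmarks.tar.gz", "ispd_benchmarks", force=False)
except ExtractionError as e:
    print("Failed to extract :", e)
    sys.exit(3)
```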
4 changes: 2 additions & 2 deletions vtr_flow/scripts/download_noc_mlp.py
@@ -20,7 +20,7 @@

class ExtractionError(Exception):
"""
Raised when extracting the downlaoded file fails
Raised when extracting the downloaded file fails
"""


@@ -55,7 +55,7 @@ def parse_args():
"--force",
default=False,
action="store_true",
help="Run extraction step even if directores etc. already exist",
help="Run extraction step even if directories etc. already exist",
)
parser.add_argument(
"--full_archive",
2 changes: 1 addition & 1 deletion vtr_flow/scripts/download_symbiflow.py
@@ -58,7 +58,7 @@ def parse_args():
"--force",
default=False,
action="store_true",
help="Run extraction step even if directores etc. already exist",
help="Run extraction step even if directories etc. already exist",
)

parser.add_argument("--mirror", default="google", choices=["google"], help="Download mirror")
2 changes: 1 addition & 1 deletion vtr_flow/scripts/download_titan.py
@@ -59,7 +59,7 @@ def parse_args():
"--force",
default=False,
action="store_true",
help="Run extraction step even if directores etc. already exist",
help="Run extraction step even if directories etc. already exist",
)
parser.add_argument(
"--device_family",
24 changes: 12 additions & 12 deletions vtr_flow/scripts/noc/noc_benchmark_test.py
@@ -41,7 +41,7 @@
POST_ROUTED_FREQ = "Post Route Freq (MHz): "
ROUTE_TIME = "Route Time (s): "

# phrases to identify lines that contain palcement data
# phrases to identify lines that contain placement data
PLACEMENT_COST_PHRASE = "Placement cost:"
NOC_PLACEMENT_COST_PHRASE = "NoC Placement Costs."
PLACEMENT_TIME = "# Placement took"
@@ -87,8 +87,8 @@ def noc_test_command_line_parser(prog=None):

Run the NoC driven placement on a design located at
./noc_test_circuits (design should be in .blif format).
Where we want to run 5 seeds (5 seperate runs)
using 3 threads (running 3 seperate runs of VPR in parallel).
Where we want to run 5 seeds (5 separate runs)
using 3 threads (running 3 separate runs of VPR in parallel).
For more information on all options run program with '-help'
parameter.

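A minimal sketch of the seeds-vs-threads idea described in the usage text above: each seed gets its own complete VPR command line, and a fixed-size worker pool runs them a few at a time (command construction is shown later in the script; the pool approach here is illustrative):

```python
import subprocess
from multiprocessing.pool import ThreadPool

def run_one(cmd):
    # cmd is one complete VPR command line (one placement seed).
    return subprocess.run(cmd, capture_output=True, text=True).returncode

def run_all(commands, num_threads):
    # e.g. 5 seed runs spread over 3 worker threads, as in the example above.
    with ThreadPool(num_threads) as pool:
        return pool.map(run_one, commands)
```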
@@ -120,7 +120,7 @@
"-arch_file",
default="",
type=str,
help="The architecture file the NoC benchamrk designs are placed on",
help="The architecture file the NoC benchmark designs are placed on",
)

parser.add_argument("-vpr_executable", default="", type=str, help="The executable file of VPR")
@@ -250,7 +250,7 @@ def process_vpr_output(vpr_output_file):

open_file = open(vpr_output_file)

# datastrcuture below stors the palcement data in a disctionary
# datastructure below stors the placement data in a dictionary
placement_data = {}

# process each line from the VPR output
@@ -291,7 +291,7 @@ def process_vpr_output(vpr_output_file):

def process_placement_costs(placement_data, line_with_data):
"""
Given a string which contains palcement data. Extract the total
Given a string which contains placement data. Extract the total
placement cost and wirelength cost.
"""

@@ -308,7 +308,7 @@ def process_placement_costs(placement_data, line_with_data):
# 1st element is the overall placement cost, second element is the
# placement bb cost and the third element is the placement td cost.
#
# Covert them to floats and store them (we don't care about the td cost so # ignore it)
# Convert them to floats and store them (we don't care about the td cost so # ignore it)
placement_data[PLACE_COST] = float(found_placement_metrics.group(1))
placement_data[PLACE_BB_COST] = float(found_placement_metrics.group(2))

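A hedged sketch of the extraction the comment above describes: pull the three floating-point fields out of the placement-cost line and keep only the first two. The function and field names below are illustrative, and the exact wording of VPR's log line is an assumption:

```python
import re

PLACE_COST = "place_cost"
PLACE_BB_COST = "place_bb_cost"

def parse_placement_costs(placement_data, line_with_data):
    # Assumed log shape: "Placement cost: <total>, bb_cost: <bb>, td_cost: <td>"
    found = re.search(r"Placement cost: (.*), bb_cost: (.*), td_cost: (.*)", line_with_data)
    if found is None:
        return
    # Convert the first two groups to floats; the td cost is ignored, as noted above.
    placement_data[PLACE_COST] = float(found.group(1))
    placement_data[PLACE_BB_COST] = float(found.group(2))

data = {}
parse_placement_costs(data, "Placement cost: 1.2345, bb_cost: 0.678, td_cost: 0.9")
print(data)  # {'place_cost': 1.2345, 'place_bb_cost': 0.678}
```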
@@ -446,10 +446,10 @@ def check_for_constraints_file(design_file):

def gen_vpr_run_command(design_file, design_flows_file, user_args):
"""
Generate a seperate VPR run commands each with a unique placement
Generate a separate VPR run commands each with a unique placement
seed value. The number of commands generated is equal to the number
of seeds the user requested to run.
For each run we generate seperate '.net' files. This was needed
For each run we generate separate '.net' files. This was needed
since a single net file caused failures when multiple concurrent
VPR runs tried accessing the file during placement.
"""
@@ -620,7 +620,7 @@ def process_vpr_runs(run_args, num_of_seeds, route):
place_param: value / num_of_seeds for place_param, value in vpr_average_place_data.items()
}

# need to divide the NoC latency cost by the weighting to conver it to
# need to divide the NoC latency cost by the weighting to convert it to
# physical latency
vpr_average_place_data[NOC_LATENCY_COST] = (
vpr_average_place_data[NOC_LATENCY_COST] / latency_weight
@@ -638,7 +638,7 @@ def print_results(parsed_data, design_file, user_args):
results_file_name = os.path.join(os.getcwd(), results_file_name + ".txt")
results_file = open(results_file_name, "w+")

# write out placement info individually in seperate lines
# write out placement info individually in separate lines
results_file.write("Design File: {0}\n".format(design_file))
results_file.write("Flows File: {0}\n".format(user_args.flow_file))

@@ -664,7 +664,7 @@ def execute_vpr_and_process_output(vpr_command_list, num_of_seeds, num_of_thread
for single_vpr_command in vpr_command_list:

# generate VPR output file_name
# the constants represent the positions of the variabels in the command list
# the constants represent the positions of the variables in the command list
design_file_name = single_vpr_command[2]
seed_val = single_vpr_command[18]
vpr_out_file = "{0}.{1}.vpr.out".format(design_file_name, seed_val)
6 changes: 3 additions & 3 deletions vtr_flow/scripts/python_libs/vtr/abc/abc.py
@@ -353,7 +353,7 @@ def run_lec(
The reference netlist to be commpared to

implementation_netlist :
The implemeted netlist to compare to the reference netlist
The implemented netlist to compare to the reference netlist


Other Parameters
@@ -419,8 +419,8 @@ def run_lec(

def check_abc_lec_status(output):
"""
Reads abc_lec output and determines if the files were equivelent and
if there were errors when preforming lec.
Reads abc_lec output and determines if the files were equivalent and
if there were errors when performing lec.
"""
equivalent = None
errored = False
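A rough sketch of how an output scan like check_abc_lec_status can be structured; the marker strings below are placeholders for illustration, not ABC's verified output text:

```python
def scan_lec_output(output_lines,
                    equivalent_marker="Networks are equivalent",
                    not_equivalent_marker="Networks are NOT EQUIVALENT",
                    error_marker="Error"):
    # Walk the tool output once, recording the equivalence verdict and any errors.
    equivalent = None
    errored = False
    for line in output_lines:
        if not_equivalent_marker in line:
            equivalent = False
        elif equivalent_marker in line:
            equivalent = True
        if error_marker in line:
            errored = True
    return equivalent, errored

print(scan_lec_output(["Networks are equivalent."]))  # (True, False)
```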