Commit 81eff05
[tests] switch lm_eval invocation to use pre-loaded transformers model (#2018)
SUMMARY:
`lm_eval==0.4.9.1` has a broken entrypoint when using a model with a
compressed-tensors quantization config with `--model hf`:
```
FAILED tests/lmeval/test_lmeval.py::TestLMEval::test_lm_eval[tests/lmeval/configs/vl_w4a16_actorder_weight.yaml] - ValueError: The model is quantized with CompressedTensorsConfig but you are passing a dict config. Please make sure to pass the same quantization config class to `from_pretrained` with different loading attributes.
```
It has been resolved on lm_eval main, though a separate issue persists that is resolved by EleutherAI/lm-evaluation-harness#3393. While that fix is in transit, and to avoid having to use lm_eval main in our CI/CD, this PR resolves the issue by pre-loading the model with `AutoModelForCausalLM` rather than relying on lm_eval's strange model-loading logic.
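For context, a minimal sketch of the pre-loading approach, assuming lm_eval's `HFLM` wrapper (which accepts an already-instantiated `PreTrainedModel` via its `pretrained` argument); the model ID and task name below are placeholders, not the values used in the actual tests:
```python
# Sketch only: pre-load the model with transformers, then hand the
# instantiated model to lm_eval instead of a "--model hf" string,
# sidestepping lm_eval's own quantization-config resolution.
import lm_eval
from lm_eval.models.huggingface import HFLM
from transformers import AutoModelForCausalLM

MODEL_ID = "org/example-w4a16-model"  # placeholder model ID

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    torch_dtype="auto",
)

results = lm_eval.simple_evaluate(
    model=HFLM(pretrained=model),  # wrap the pre-loaded model
    tasks=["gsm8k"],               # placeholder task
)
```
Because `HFLM` receives the model object directly, lm_eval never attempts to re-interpret the compressed-tensors quantization config during loading, which is what triggered the `ValueError` above.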
TEST PLAN:
Tests run now. For some reason the vl test is super slow on ibm-h100-1; the same thing happens on main. I've seen this before but I'm not sure what's causing it, and it seemed to correct itself the following day.
---------
Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
2 files changed: +40 −43 lines (diff not shown)