Direct answer: find your sensitivity by testing cm/360 ranges against the skills you actually need: static clicking, dynamic clicking, tracking, target switching, and game transfer. Do not copy a pro's value blindly, and do not change sensitivity after one bad session.
In-game sensitivity numbers are not comparable across games unless you convert them through yaw values and DPI. cm/360 is the physical distance your mouse travels for one full 360-degree turn. It is the most useful common language because it survives game changes, DPI changes, and trainer changes. A player can say "I use roughly 43 cm/360" and then rebuild that feel in Valorant, CS2, Apex, or an aim trainer with the correct converter.
The practical formula is: cm/360 = (360 / (DPI × in-game sensitivity × yaw)) × 2.54. The yaw value is game-specific, so a sensitivity number that looks low in one game may be normal in another. FOV does not change physical cm/360, but it changes how fast the screen appears to move, which affects comfort. That is why the final step must always happen in the game or a game-specific trainer preset.
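To make the conversion concrete, here is a minimal Python sketch of both directions of the formula. The yaw values are commonly cited community figures, not numbers verified against game files, so treat them as assumptions and confirm them for your game.

```python
# cm/360 converter: physical distance per full turn, plus the inverse
# for rebuilding the same feel in another game.
# ASSUMPTION: these yaws are commonly cited community values; verify them.
YAW = {
    "cs2": 0.022,      # degrees turned per mouse count at sensitivity 1
    "valorant": 0.07,
    "apex": 0.022,
}

def cm_per_360(dpi: float, sens: float, yaw: float) -> float:
    """cm/360 = (360 / (DPI * sensitivity * yaw)) * 2.54."""
    return 360.0 / (dpi * sens * yaw) * 2.54

def sens_for_cm360(dpi: float, cm360: float, yaw: float) -> float:
    """Invert the formula: the sensitivity that reproduces a given cm/360."""
    return 360.0 * 2.54 / (dpi * cm360 * yaw)

if __name__ == "__main__":
    cm = cm_per_360(800, 1.2, YAW["cs2"])              # ~43.3 cm/360
    print(f"CS2 800 DPI @ 1.2 -> {cm:.1f} cm/360")
    val = sens_for_cm360(800, cm, YAW["valorant"])     # ~0.377
    print(f"Same feel in Valorant -> sensitivity {val:.3f}")
```

The example matches the "roughly 43 cm/360" figure above: 800 DPI at sensitivity 1.2 in CS2 comes out near 43.3 cm, and the same distance maps to about 0.377 in Valorant under these assumed yaws.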
Pick three values: one lower than your current sensitivity, one current or middle value, and one higher value. Test each for static clicking, smooth tracking, and target switching. Keep the order random enough that warm-up does not favor one value. Use two runs per category and write down accuracy, score, and tension. The winner is not necessarily the highest score; a value loses if the mouse hand feels tense or the movement falls apart in transfer.
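A small sketch of one way to space the three candidates, assuming a 20 percent spread around your current cm/360; the spread is an illustrative choice, not a rule.

```python
import random

def test_values(current_cm360: float, spread: float = 0.20) -> list[float]:
    """Build low/middle/high sensitivity candidates, expressed as cm/360.

    Note the inversion: LOWER sensitivity means MORE cm per 360,
    so the low-sensitivity candidate is current * (1 + spread).
    """
    return [
        round(current_cm360 * (1 + spread), 1),  # lower sensitivity
        round(current_cm360, 1),                 # current / middle
        round(current_cm360 * (1 - spread), 1),  # higher sensitivity
    ]

candidates = test_values(43.0)   # e.g. [51.6, 43.0, 34.4]
random.shuffle(candidates)       # randomize so warm-up does not favor one
print(candidates)
```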
Lower sensitivity often helps small static corrections and long-range precision. Higher sensitivity often helps wide turns, close target switching, and games with heavy verticality. The correct value is the one that gives the least damaging tradeoff for your main game. A CS2 rifler can accept slower 180s if first-bullet placement improves. An Apex player cannot choose a value that makes close-range tracking impossible.
After choosing a range, lock it for at least two weeks unless pain or hardware constraints appear. Run the same warm-up, benchmark once per week, and review VODs for correction distance. If you change sensitivity every time a benchmark score drops, you never learn whether the problem is technique, fatigue, or the number itself.
Voltaic benchmarks are useful because they split aim into categories instead of treating aim as one number. Use the same idea for sensitivity. If a value improves static clicking but damages tracking, the result matters differently depending on your game. Use static clicking, smooth tracking, target switching, and click timing as the four practical test categories. Then run a game-specific warm-up from the warm-up page.
Aim Lab's benchmark and regimen articles support the same loop: test, identify weak categories, train those categories, and retest. Kovaak's deep scenario library lets you test narrow variants once you understand the problem. The methodology is more important than the app. You are using controlled drills to make a decision, not hunting for a magical sensitivity.
| Test | Low value | Middle value | High value | Notes |
|---|---|---|---|---|
| Static clicking | accuracy / tension | accuracy / tension | accuracy / tension | Look for overflicks and undershoots. |
| Smooth tracking | uptime / jitter | uptime / jitter | uptime / jitter | Look for tremor and reacquisition time. |
| Target switching | accuracy / route | accuracy / route | accuracy / route | Look for clean stop-start control. |
| Game transfer | VOD note | VOD note | VOD note | Look at first correction distance in real fights. |
Tactical shooters usually reward a sensitivity range that allows stable first bullets and small crosshair placement corrections. That does not mean every Valorant or CS2 player needs the same cm/360. It means your test should include microflicks, static clicking, and counter-strafe or stop-shoot transfer. If a high value makes your first bullet fast but unstable, the score is not worth the tradeoff. If a low value makes every close turn late, the precision is not worth the tradeoff either.
Tracking-heavy games usually need a range that lets the hand follow without constant mouse lifts. Apex and Overwatch fights often last longer than one click, so comfort and smoothness matter as much as first-shot accuracy. Test smooth tracking, reactive tracking, and strafe tracking before deciding. If a sensitivity makes you hold a target smoothly but prevents quick target switches, it may still fail in real fights. If it makes target switches easy but creates jitter on long beams, it may be too high for your role.
Fortnite and other mixed-mechanic games need the most balanced test. A useful Fortnite sensitivity must handle shotgun timing, vertical changes, close tracking, and fast target switching. This is why copying one public setting is weak methodology. The same value can feel good in an aim trainer and bad during an edit fight because camera movement, target size, and pressure are different.
Use the same mousepad area, posture, warm-up, and drill order every time. Run the low, middle, and high values in a rotated order so the last value does not always benefit from warm-up. For each value, run static clicking, smooth tracking, target switching, and game transfer. Write down three things: score, accuracy, and tension. Then add one sentence about the most common miss. The sentence is often more valuable than the score because it tells you what the sensitivity changed.
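One simple way to implement the rotation, sketched in Python; the candidate values and the four-test list carry over from the earlier steps.

```python
# Rotate the candidate order between sessions so the final slot,
# which benefits most from warm-up, is shared evenly across values.
TESTS = ["static clicking", "smooth tracking", "target switching", "game transfer"]

def session_order(values: list[float], session_index: int) -> list[float]:
    """Shift the candidate list by one position per session."""
    shift = session_index % len(values)
    return values[shift:] + values[:shift]

candidates = [51.6, 43.0, 34.4]  # cm/360 candidates from the first step
for day in range(3):
    print(f"Session {day + 1}: {session_order(candidates, day)} "
          f"(run: {', '.join(TESTS)})")
```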
After the first test, remove the worst value and create two smaller steps around the best value. Repeat the test on another day. If the same range wins twice, lock it for two weeks. If two ranges trade wins, choose the one that feels better in your main game, not the one that produced one higher trainer score. Main-game transfer is the final judge because the sensitivity must survive movement, recoil, peeks, and pressure.
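The narrowing step is the same spacing idea with a tighter window; the 10 percent window below is an assumption you can tune.

```python
def narrow_around(best_cm360: float, window: float = 0.10) -> list[float]:
    """Two smaller steps around the winning cm/360 for the second test day."""
    return [
        round(best_cm360 * (1 + window), 1),  # slightly lower sensitivity
        round(best_cm360, 1),
        round(best_cm360 * (1 - window), 1),  # slightly higher sensitivity
    ]

print(narrow_around(43.0))  # [47.3, 43.0, 38.7]
```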
When you retest after two weeks, do not expect every score to rise. A good sensitivity can make one category stable while another category requires technique work. If static clicking remains weak across every value, the problem is likely click control or visual confirmation. If tracking remains weak across every value, the problem may be smoothness, tension, or target reading. Sensitivity testing should reveal technique problems, not hide them.
The most common failure mode in aim training is not laziness. It is unstructured repetition. A player opens a trainer, chooses a task that feels familiar, plays until the score stops rising, and then assumes the routine is complete. That process can warm the hand, but it does not reliably diagnose a weakness. This sensitivity methodology is meant to be used as a decision tool. Pick a category, define the skill being trained, run a small number of measured sets, and then connect the result to a game-specific transfer block.
A useful session has a short written target before it starts. For example: "reduce overshoot on microflicks," "hold smoother tracking through reversals," "confirm first bullet before switching," or "keep head height after recoil." The target should describe behavior, not a dream score. Scores are useful, but they are noisy. Behavior is easier to inspect in a recording and easier to transfer into the next match. If the score rises while the miss pattern remains the same, the routine needs adjustment.
Use a two-layer log. The first layer is numeric: score, accuracy, run length, target size, and sensitivity. The second layer is qualitative: main miss type, tension level, and transfer note. The transfer note is the bridge to the actual game. It might say "deathmatch showed crosshair still low after first kill" or "Apex range tracking felt smooth until target switched direction." Over a month, these notes show whether the training is changing the fight pattern or only improving isolated trainer comfort.
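A minimal CSV sketch of the two-layer log; the column names are placeholders for the two layers described above, and the example row is illustrative, not real data.

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("sens_log.csv")
FIELDS = [
    # numeric layer
    "date", "scenario", "cm360", "score", "accuracy", "run_length", "target_size",
    # qualitative layer
    "main_miss", "tension", "transfer_note",
]

def log_run(**row) -> None:
    """Append one run to the log, writing the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"date": date.today().isoformat(), **row})

log_run(scenario="smooth tracking", cm360=43.0, score=812, accuracy=0.87,
        run_length="60s", target_size="medium",
        main_miss="jitter on direction reversals", tension="low",
        transfer_note="Apex range: lost target on strafe reversal")
```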
Retest on a schedule, not on emotion. If a bad ranked game sends you back to the benchmark page for five angry retests, the data will be useless. Use one planned retest per week for longer programs and one short retest after changing sensitivity or scenario difficulty. When a retest exposes a weakness, train that weakness for several sessions before testing again. This keeps the routine from turning into a scoreboard loop.
Finally, separate warm-up, training, and testing. Warm-up should be easy and short. Training should be specific and slightly uncomfortable. Testing should be standardized and infrequent. Mixing those three jobs creates confusion: a warm-up becomes tiring, a training block becomes a leaderboard chase, and a test becomes a tilted grind. The pages in this FPSTrain library are designed to keep those jobs separate while still linking them together through drills, routines, game warm-ups, and the progression roadmap.
Use source links as methodology anchors, not as decoration. Official benchmark pages, Kovaak's platform references, and Aimlabs routine articles are useful because they show how serious training ecosystems organize practice: categories, repeatable scenarios, leaderboards or progress tracking, and retesting. They do not remove the need for judgment. A scenario name can change, a benchmark season can change, and a player's main game can change. The durable part is the workflow: define the category, run comparable reps, inspect the miss pattern, and transfer the result.
If you are unsure where to start, choose the lowest-risk version of the routine. Lower target speed, slightly larger targets, shorter sets, and stricter accuracy requirements create better early data than a hard scenario played badly. Once the movement is clean, add pressure one variable at a time. This is the difference between a training plan and a pile of tasks. A plan makes the next decision easier; a pile of tasks only gives you more ways to be inconsistent.
This page uses official methodology references and avoids fake rank claims or invented testimonials.