Touch Sensitivity Tester

Test swipe latency and touch event smoothness with an exportable report.

About Touch Sensitivity Tester

Use this Touch Sensitivity Tester to measure how responsive swipe gestures feel in your current browser and device setup. It focuses on swipe latency (the delay between touching the screen and your first meaningful movement) plus the smoothness of move events during a gesture.

Touch responsiveness is shaped by many layers: the digitizer and touch firmware, the operating system’s input pipeline, browser scheduling, the page’s main-thread workload, and even accessibility filters. That’s why “it feels laggy today” is often hard to reproduce. A quick, repeatable test gives you a baseline you can compare when you switch browsers, enable a high refresh rate, update your OS, or change power and accessibility settings.

This tool is designed for practical comparison. Run short sessions under consistent conditions, then compare averages and percentiles to understand both typical and worst-case behavior. You can copy the results into a bug report or download a report for QA notes.

How It Works

The tool opens a dedicated swipe surface and records high‑resolution timestamps for pointer events using the browser’s performance clock. Each gesture is treated as a “swipe attempt” from pointer down to pointer up. The report summarizes timing and variability across all swipes captured during your timed session.

During the session you can perform several swipes—short flicks, longer drags, and diagonal motions. This variety often reflects real usage better than a single, perfect swipe. After the timer ends, the tool aggregates all captured gestures into a single report so you can compare runs side-by-side.
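
The down-to-up grouping can be sketched in a few lines. The sketch below is illustrative TypeScript, not the tool's actual code: `PointerSample` and `collectSwipes` are hypothetical names, and the events are plain objects standing in for real PointerEvent data.

```typescript
// Sketch: group a flat pointer-event stream into complete "swipe attempts".
// Shapes and names here are illustrative, not the tool's actual code.
interface PointerSample {
  type: "down" | "move" | "up";
  t: number; // high-resolution timestamp, e.g. from performance.now()
  x: number;
  y: number;
}

interface SwipeAttempt {
  downAt: number;
  upAt: number;
  moves: PointerSample[];
}

function collectSwipes(events: PointerSample[]): SwipeAttempt[] {
  const swipes: SwipeAttempt[] = [];
  let current: SwipeAttempt | null = null;
  for (const ev of events) {
    if (ev.type === "down") {
      current = { downAt: ev.t, upAt: ev.t, moves: [] };
    } else if (ev.type === "move" && current) {
      current.moves.push(ev);
    } else if (ev.type === "up" && current) {
      current.upAt = ev.t;
      swipes.push(current);
      current = null; // gestures with no matching "up" are dropped
    }
  }
  return swipes;
}
```

In a real page the same logic would sit behind `pointerdown`/`pointermove`/`pointerup` listeners on the swipe surface.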

What the tester measures

  • First‑move latency: time from touch down to the first movement that exceeds a small distance threshold (to ignore micro jitter and sensor noise).
  • Move event cadence: intervals between consecutive move events while your finger (or mouse) is in motion. A steadier cadence usually feels smoother.
  • Jitter: variability in move intervals; higher jitter can feel “stuttery” even if average latency is low.
  • Gesture coverage: how many complete swipes were captured, plus approximate motion totals. More data makes percentiles more meaningful.
  • Percentiles (p50, p90): not just the average, but how bad the slower swipes get. The p90 is often closer to “how it feels on a bad swipe.”
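
Percentiles reward a concrete definition. Here is a minimal sketch using the common nearest-rank convention (the tool's exact method may differ):

```typescript
// Sketch: nearest-rank percentile over first-move latencies in milliseconds.
// Nearest-rank: the smallest value with at least p% of samples at or below it.
function percentile(values: number[], p: number): number {
  if (values.length === 0) return NaN;
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [22, 25, 24, 31, 26, 23, 58, 27, 24, 29]; // ms, illustrative
const p50 = percentile(latencies, 50);
const p90 = percentile(latencies, 90);
// A low p50 with a much higher p90 points at occasional slow swipes.
```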

All calculations run locally in your browser. No event data is sent anywhere. The optional event log stays on the page, and the downloadable report is generated locally as a plain text file you control.
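
As an illustration of how a report can be assembled entirely on the client, here is a sketch of a plain-text report builder. `SessionSummary` and its fields are hypothetical; in a browser, the resulting string could be offered as a download via a Blob URL.

```typescript
// Sketch: build a plain-text report locally (field names illustrative).
interface SessionSummary {
  label: string;
  swipes: number;
  avgLatencyMs: number;
  p50Ms: number;
  p90Ms: number;
}

function buildReport(s: SessionSummary): string {
  return [
    `Touch Sensitivity Tester report`,
    `Label: ${s.label}`,
    `Swipes captured: ${s.swipes}`,
    `Avg first-move latency: ${s.avgLatencyMs.toFixed(1)} ms`,
    `p50: ${s.p50Ms.toFixed(1)} ms  p90: ${s.p90Ms.toFixed(1)} ms`,
  ].join("\n");
}
```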

Keep in mind that the tool measures the browser’s perspective—when events are delivered to the page—not the raw electrical sensing time inside the digitizer. For web apps and mobile web UX, that perspective is often exactly what you want, because it reflects how quickly your interface can react.

Key Features

Session-based testing for repeatable comparisons

Instead of focusing on a single gesture, you run a timed session (for example 5–10 seconds) and perform multiple swipes in a consistent area. This makes it easier to compare “before” and “after” changes because the sampling window stays the same. If you are debugging an issue, you can run three short sessions and look for a pattern rather than trusting one outlier.

Configurable movement threshold

A small movement threshold filters out tiny unintentional motion immediately after contact. Many screens report slight movement even when you try to hold still, especially when the finger lands at an angle. By requiring movement beyond the threshold before counting the first move, the reported latency aligns better with what people perceive as the start of a swipe.
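
The threshold logic can be sketched as follows. The function name and its 6 px default are illustrative assumptions within the 5–8 px range suggested above, not the tool's actual implementation.

```typescript
// Sketch: first-move latency with a distance threshold.
// Moves within the threshold of the touch-down point count as jitter, not a swipe.
interface Sample { t: number; x: number; y: number }

function firstMoveLatency(
  down: Sample,
  moves: Sample[],
  thresholdPx = 6, // illustrative default in the 5-8 px range
): number | null {
  for (const m of moves) {
    const dist = Math.hypot(m.x - down.x, m.y - down.y);
    if (dist > thresholdPx) return m.t - down.t; // ms since touch down
  }
  return null; // finger never moved past the threshold
}
```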

Clear metrics with practical interpretation

The output includes average latency and percentiles, move event rates, and jitter. A single number rarely tells the whole story: for example, a low average with a high p90 indicates occasional spikes that can feel like “random lag.” Likewise, a normal latency with a low move event rate can make swipes feel less precise. The report is structured so you can paste it into QA notes or attach it to a support ticket.
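
One common way to derive cadence and jitter is from the gaps between consecutive move timestamps. The sketch below uses the standard deviation of inter-move intervals as its jitter definition; the tool's exact definitions may differ.

```typescript
// Sketch: move-event rate and jitter from move timestamps.
// Jitter here is the standard deviation of inter-move intervals in ms.
function moveStats(timestamps: number[]): { rateHz: number; jitterMs: number } {
  if (timestamps.length < 2) return { rateHz: 0, jitterMs: 0 };
  const intervals: number[] = [];
  for (let i = 1; i < timestamps.length; i++) {
    intervals.push(timestamps[i] - timestamps[i - 1]);
  }
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance =
    intervals.reduce((a, b) => a + (b - mean) ** 2, 0) / intervals.length;
  return { rateHz: 1000 / mean, jitterMs: Math.sqrt(variance) };
}
```

For example, moves arriving every 16 ms would report roughly 62.5 Hz with zero jitter, while the same average interval with alternating 8 ms and 24 ms gaps would report the same rate but noticeable jitter.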

Optional event log for troubleshooting

When enabled, the tool records a compact timeline of gesture events (down/move/up) with timestamps. This is useful when diagnosing inconsistent gesture handling across browsers or when checking whether a device is dropping move events under load. If you are a developer, the log can also help you confirm that pointer events are firing as expected and that your device is not switching pointer types mid-gesture.
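
As an example of what such a log enables, a simple heuristic can flag likely dropped move events by looking for interval gaps far larger than the median interval. The function and its 2.5x factor are illustrative, not the tool's actual logic.

```typescript
// Sketch: flag likely dropped move events via unusually large interval gaps.
// Returns the timestamps at which each suspicious gap starts.
function findGaps(timestamps: number[], factor = 2.5): number[] {
  if (timestamps.length < 3) return [];
  const intervals = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  const median = [...intervals].sort((a, b) => a - b)[Math.floor(intervals.length / 2)];
  return intervals
    .map((d, i) => ({ d, i }))
    .filter(({ d }) => d > factor * median)
    .map(({ i }) => timestamps[i]);
}
```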

Visual latency chart

An optional chart visualizes latency distribution. Charts are especially helpful when you are comparing two runs: a “tight cluster” of latencies suggests consistent responsiveness, while a long tail indicates occasional slow starts. Even non-technical stakeholders can understand a chart that shows “most swipes are fast, but some are noticeably slow.”

Works on touch, trackpad, and mouse

The swipe surface uses Pointer Events, so you can test on phones, tablets, and desktops. While mouse and trackpad behavior is not identical to touch hardware, this makes the tool convenient for quick checks during development, when a touch device may not be within reach.
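
Pointer Events expose the input source through the `pointerType` property, which is how a session can note whether it was driven by touch, mouse, or pen. A small sketch (the helper name is hypothetical):

```typescript
// Sketch: normalize PointerEvent.pointerType for reporting.
// Some browsers report "" when the input source cannot be detected.
type PointerKind = "touch" | "mouse" | "pen";

function describeSource(pointerType: string): string {
  const known: PointerKind[] = ["touch", "mouse", "pen"];
  return known.includes(pointerType as PointerKind) ? pointerType : "unknown";
}
```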

Combined, these features make the tool useful for both casual checks and structured QA. You can run it after a browser update, after enabling a new accessibility setting, or during performance profiling to see whether main-thread work is affecting gesture responsiveness.

Use Cases

  • Mobile QA and web app testing: validate that a swipe-heavy UI (carousels, drawers, maps, calendars) feels consistent across devices and browsers.
  • Comparing performance settings: test the impact of high refresh rate modes, battery saver, reduced motion, or background tabs.
  • Diagnosing input lag complaints: collect a repeatable report when users say “swipes feel delayed” and attach it to a support ticket.
  • Regression checks after updates: run a baseline session before and after OS or browser updates to see whether touch delivery changed.
  • Accessibility and assistive tech checks: understand how touch filters, magnifiers, or other assistive tools affect cadence and gesture recognition.
  • Hardware troubleshooting: spot unusually high first‑move latency or very low move event rates that might indicate digitizer issues.
  • Remote support: ask a user to run the test and share the report so you can compare their results with expected baselines.

For development teams, the tool can also serve as a sanity check when implementing custom gesture recognition. If your app’s own swipe handler feels slow, you can confirm whether the raw event delivery is steady. If delivery looks healthy but your UI still lags, the bottleneck may be rendering or heavy JavaScript work rather than touch input.

For end users, the tool helps answer practical questions: “Is my phone dropping events when battery saver is on?” “Does this browser feel smoother?” “Did my latest update change input responsiveness?” A quick test session can make these comparisons less subjective.

When comparing results, pay special attention to the number of swipes captured. Percentiles become more informative with more samples. If you only captured two swipes, p90 is not meaningful. If you captured ten or more swipes, p50 and p90 give a better view of typical and worst-case behavior.

Optimization Tips

Keep the environment consistent

Close heavy tabs, pause downloads, and keep brightness and refresh rate stable while testing. If you are comparing browsers, use the same network conditions and avoid switching apps mid-session. Even small background interruptions—like an incoming call banner or a notification—can create spikes that show up in the p90.

Choose a realistic session length

Short sessions (5–8 seconds) are usually enough for quick checks. Longer sessions can be helpful if you suspect sporadic lag. A good strategy is to run three short sessions rather than one long session; this reduces fatigue and makes it easier to spot a trend.

Swipe naturally and vary your gestures

Swipe the way you normally interact with your device. Very slow drags can change event cadence, while extremely fast flicks may reduce the number of captured move events. If your real app includes both short flicks and long drags, try to include both in the session to see whether one style triggers worse latency.

Interpret latency together with jitter

A low average latency with high jitter can still feel inconsistent. If jitter is high, try repeating the test with fewer background tasks or on a different browser version. If the move event rate is low, check whether a power-saving mode is limiting refresh rate or whether the device is thermally throttling.

Use percentiles to spot “spiky” behavior

If average latency looks fine but the p90 is noticeably higher, you may have occasional stalls. In web apps this can happen when the main thread is busy (layout thrashing, long tasks, heavy animations). Consider testing again while keeping the page idle to see whether the spikes disappear.

Finally, remember that this tool is best for relative comparisons. The most actionable insights come from comparing your device under different settings or comparing two browsers on the same device. Treat the report as a diagnostic snapshot, not an absolute hardware rating.

FAQ

Is this a hardware test or a browser test?

This is a web benchmark. It measures how quickly your browser delivers pointer events and how steady those events are during swipes. It can reflect hardware and OS differences, but the results are specific to the browser environment you test in.

Why do results differ between runs?

Touch responsiveness can change with CPU load, thermal throttling, power-saving modes, and background tasks. Even minor differences—like a notification arriving mid-test—can add occasional spikes. Run multiple sessions and compare percentiles to judge consistency.

What movement threshold should I use?

A threshold of 5–8 pixels is a good starting point for most screens. Lower values can treat tiny sensor jitter as movement and report unrealistically low latency. Higher values can delay first-move detection if you start very gently.

Does a high refresh rate change the results?

Higher refresh rates can improve perceived smoothness, and some devices deliver input events more frequently in high refresh modes. This tool can help you see changes in move event cadence and jitter, though exact behavior depends on device and browser.

Can I test with a mouse or trackpad?

Yes. The swipe surface uses Pointer Events, so mouse and trackpad drags are captured similarly to touch. Results may not translate directly to touch hardware, but it is useful for checking browser event delivery and gesture responsiveness.

Why Choose This Tool

Many “touch tests” are visual demos that don’t quantify anything. This Touch Sensitivity Tester gives you concrete numbers—first‑move latency, event rate, and jitter—so you can compare devices, browsers, or settings with less guesswork. The session approach also makes it easier to reproduce issues and communicate them clearly to teammates, because it encourages multiple swipes and focuses on percentiles rather than a single best-case gesture.

If you build swipe-driven interfaces, you know that responsiveness is not just about frames per second. It is about how quickly the page receives input and how evenly that input arrives while a finger is moving. These metrics help you separate input delivery issues from rendering problems. If latency and cadence look healthy but the UI feels slow, you may need to optimize animations, reduce long tasks, or avoid forced synchronous layout. If latency is high or jittery, the issue may be a system-level setting, a background load problem, or a browser-specific behavior worth reporting.

Whether you are validating a new device, troubleshooting a user report, or setting up a lightweight QA checklist, this tool provides a fast, exportable snapshot of swipe responsiveness. Run a few sessions, download the report, and you have a baseline you can revisit whenever the environment changes.