When people think about survey use cases in an LMS, they often assume the story starts with the survey itself; however, it usually does not. It starts with a much bigger institutional question: how do you collect meaningful feedback in a way that is reliable, scalable, and actually embedded in the learner experience?
That question sat at the heart of our recent webinar, where Chamberlain University and Oregon State University shared how they use Qualtrics surveys in Canvas through Qualtrics LTI. On paper, these institutions are approaching the integration from very different angles. Chamberlain uses it to support end-of-course evaluations at scale. Oregon State University enables a more distributed model, where faculty and programs can deploy their own surveys inside Canvas for a range of use cases.
Different models, different ownership structures, different goals. And yet, the most interesting takeaway was not how different they are. It was how much they have in common.
At first glance, Chamberlain University and Oregon State University represent two distinct approaches.
At Chamberlain, the driver was centralised course evaluation. With tens of thousands of learners and more than 50,000 survey links being managed every eight weeks, the previous email-based process had become an administrative burden. Distribution issues, duplicate sends, bounced emails, and data integrity concerns all added friction to a process that should have been helping the institution listen better.
At Oregon State University, the challenge looked different. With a small LMS administration team supporting a large and complex university environment, the need was not centralisation. It was controlled decentralisation. Faculty and programs needed the ability to run more robust Qualtrics surveys in Canvas without depending on a central team every time. Native survey functionality in Canvas could not support the depth, flexibility, or response quality they needed.
So yes, the use cases differ. But the institutional need underneath them is remarkably similar: both institutions needed a way to make feedback collection easier to deploy, easier to complete, and more trustworthy in its outcomes.
One of the strongest themes from the conversation was that embedding Qualtrics directly into Canvas does far more than simplify distribution.
It changes behaviour. When a survey lives in email, it competes with everything else in a learner’s inbox. It is easier to miss, easier to ignore, and easier to disconnect from the course experience it is meant to support.
When a survey lives inside Canvas as an assignment, it becomes part of the rhythm of learning. It shows up where learners already are. It can appear on a to-do list. It can be tied to timing, expectations, and course flow. It feels less like an external interruption and more like a natural part of participation.
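For readers curious about the mechanics, there is a reason an embedded survey can behave like any other assignment. Under the LTI 1.3 Deep Linking specification, the tool returns a content item describing the resource; when that item includes a line item, Canvas creates a gradable assignment, which is what surfaces the survey on learners' to-do lists. The sketch below shows the general shape of such a content item, not Qualtrics' actual payload: the title and launch URL are hypothetical, and the signed JWT that transports it is omitted.

```python
# A sketch of the content item an LTI 1.3 deep-linking response might carry.
# The shape follows the IMS Deep Linking 2.0 specification; the title and
# launch URL are hypothetical, and the signed JWT wrapper is omitted.
content_item = {
    "type": "ltiResourceLink",                       # an LTI-launchable resource
    "title": "End-of-Course Evaluation",             # becomes the assignment name
    "url": "https://tool.example.com/launch/SV_hypotheticalId",
    "lineItem": {                                    # requests a gradebook column,
        "scoreMaximum": 1,                           # e.g. complete/incomplete
        "label": "End-of-Course Evaluation",
    },
}
```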
That shift matters. For Chamberlain, it translated into a substantial increase in course evaluation response rates, moving from the high 30% range with email distribution to the high 50s and even low 60s after implementation. That is not just an operational improvement. It is a data quality improvement. More responses reduce non-response bias and create a more representative picture of the learner experience.
For Oregon State University, the impact was equally important, even if the model was different. Embedding surveys as Canvas assignments improved completion rates while allowing faculty to use more advanced survey logic, required responses, and even multiple submissions where needed. In other words, the integration did not just increase participation. It made the survey instrument itself more useful.
It is tempting to stop at the response-rate story, because those results are tangible and easy to point to.
But what both institutions really highlighted is that the bigger value lies in what better feedback enables.
At Chamberlain, stronger participation and cleaner processes supported more reliable data for course evaluations, accreditation reporting, and continuous improvement efforts. Some smaller programs began receiving feedback where previously there had been little to none. That broader and more balanced evidence base matters when institutions need to show not just that they collect feedback, but that they use it meaningfully.
At Oregon State University, the value was in empowering ownership without sacrificing control. Surveys and data could remain with the right owner in Qualtrics, while deployment could happen through Canvas in a way that fit the institution’s operational reality. That allowed programs, units, and faculty to move faster without creating an unsustainable support burden for the LMS team.
Again, the use cases are not identical. But the strategic outcome is. Both institutions used the integration to create a feedback process that is more embedded, more intentional, and more aligned with how their institutions actually work.
Another strong takeaway from the webinar was that ownership does not have to look the same to be effective.
At Oregon State University, ownership meant giving faculty and programs the autonomy to deploy their own surveys when and where they need them. That approach supports scale through distributed responsibility. It reflects a reality many institutions know well: central teams cannot and should not do everything.
At Chamberlain, ownership looked more centralised, but no less strategic. Their approach depends on close collaboration between institutional research, Canvas administration, and other stakeholders. Success came not just from the technology itself, but from clarity around roles, responsibilities, and communication.
That is an important point for any institution considering this kind of implementation. There is no single correct governance model.
What matters is whether the model supports your goals while protecting data quality, privacy, and operational sustainability.
The conversation also surfaced something institutions are increasingly focused on: trust.
- Trust in the process.
- Trust in the anonymity model.
- Trust in the data.
- Trust in the technology behind it.
This came through clearly in both stories. Oregon State University stressed the importance of having an integration that was technically straightforward, transparent in its permissions, and flexible in how embedded data fields were controlled. That kind of implementation clarity builds confidence with LMS teams that are rightly cautious about what enters their ecosystem.
Chamberlain, meanwhile, highlighted how improved distribution and embedded data handling increased confidence that the right learner was completing the right survey for the right course. That may sound simple, but it is foundational. If institutions cannot trust the pathway through which responses are collected, they cannot fully trust the conclusions they draw from the data.
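To make that concrete, here is a minimal sketch of the general pattern rather than Chamberlain's actual implementation: Qualtrics can populate embedded data fields from query-string parameters on the survey link, provided matching fields are declared in the survey's Survey Flow. The base URL and field names below are illustrative assumptions.

```python
from urllib.parse import urlencode

# Illustrative only: the base survey URL and field names are hypothetical.
# Qualtrics populates embedded data from query-string parameters when
# matching fields are declared in the survey's Survey Flow.
SURVEY_BASE_URL = "https://example.qualtrics.com/jfe/form/SV_hypotheticalId"

def build_survey_link(learner_id: str, course_id: str, section_id: str) -> str:
    """Append launch context as query parameters so each response arrives
    tagged with the learner, course, and section it belongs to."""
    params = urlencode({
        "learner_id": learner_id,
        "course_id": course_id,
        "section_id": section_id,
    })
    return f"{SURVEY_BASE_URL}?{params}"

# Example: a link generated at launch time for one learner in one course.
print(build_survey_link("12345", "NURS-101", "OL-8W-03"))
```

Generating the link at launch time, rather than asking learners to self-identify, is what removes the guesswork about which response belongs to which course.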
In other words, better survey experiences are not only about ease of use. They are about institutional confidence.
If I had to reduce this webinar to one takeaway, it would be this:
The most effective feedback strategies do not ask learners to step outside their learning environment to participate. They meet them where they already are.
That principle applied to both institutions, despite their different use cases.
For Chamberlain, it meant moving a high-volume evaluation process into Canvas to improve accessibility, participation, and reliability.
For Oregon State University, it meant enabling a wide variety of survey-driven teaching and programme needs within Canvas, while preserving flexibility and ownership.
In both cases, the lesson is bigger than surveys. It is about connected digital experiences and reducing friction.
And it is about designing processes that work for the people who actually have to use them, whether that is learners, faculty, institutional researchers, or LMS administrators.
For institutions at the start of this journey, the webinar offered a useful reminder: you do not need to have the same size, structure, or use case as another university to learn from their approach.
Your institution may be centralised or decentralised.
You may be focused on accreditation, course evaluations, in-course data collection, or learner check-ins. You may have a large support team or a very small one.
But the questions remain the same.
- How do you improve response rates without adding friction?
- How do you strengthen data quality without overcomplicating the process?
- How do you support ownership without creating chaos?
And how do you make feedback collection feel like part of the learning experience, rather than something separate from it?
That is where the real commonality lies.
Not in the exact use case, but in the institutional ambition behind it.
What Chamberlain University and Oregon State University demonstrated so clearly is that embedding Qualtrics surveys into Canvas is not just a technical integration.
It is an enabler of better institutional practice. It helps institutions collect better data. It helps distribute ownership in a way that fits their model. And it helps create a more connected experience for learners and staff alike.
The use cases may be different. The operational models may be different. But the outcomes they are moving toward are strikingly similar.
And that is exactly why these conversations matter. Tasha & Kate, thank you so much for sharing your experience and insights. I love these kinds of conversations, and I would highly recommend watching the recording.