Many of us are interested in improving the effectiveness of meetings. No one (well, almost no one) wants to attend one unproductive meeting after another, have ever less time for getting anything of significance done, and be buried under a growing list of action points from back-to-back meetings. That’s not a recipe for long-term business success. If you haven’t seen it already, please check out our whitepaper on Smarter Meetings; it offers our view on the theory and practice of highly effective meetings in the modern age. For all the technology on offer to improve meetings, it is the human practices and principles that remain essential. That said, we still love great meeting technology, and many people buzzed with excitement at the smarter meetings demo during last year’s Build conference.
Microsoft showed off its progress with automatic identification of meeting attendees (based on video identification as they entered the room), multi-speaker speech transcription, and automatic identification of action points and commitments made during the meeting. While it may sound like the domain of science fiction, it is edging ever closer to everyday reality in meeting rooms everywhere.
The Build conference for 2019 has just happened, and while there was no further demo of meeting room technology magic, Microsoft did announce several updates. These included:
- Conversation Transcription, a feature of Azure Speech Services, entered preview on May 6. Conversation Transcription offers a real-time transcription of multi-user conversations with automatic speaker attribution, even through cross-talk. In other words, just as demonstrated at Build 2018, you can now hold meetings where the spoken words of each attendee are automatically transcribed and attributed, even when more than one person is talking at the same time.
- Conversation Transcription will help remote meeting attendees be more informed about what is happening in the room (often it is hard to know exactly what that person just said, and in larger meetings, who exactly said it), and when paired with automatic speech translation, will better support cross-cultural teams of people.
- Microsoft’s mysterious black cone from the Build 2018 demo is a reference design for a multiple microphone array. The black cone version also includes video capabilities – which is why it could identify the people entering the room – but the reference design includes options for audio only or audio and video. Microsoft is working with partners to develop products based on these reference designs.
- Virtual microphone arrays, which would let people use the mobile devices and laptops they already carry rather than relying on a physical array being in the meeting room. With the vast majority of meeting attendees bringing a device of some kind, it will be possible to link all available (and authorised) microphones together to create the same effect. Virtual microphone arrays are still at the research stage, but watch this space.
- The ability to complement general-purpose speech and language models with a custom speech model for each organisation. Using a secured and authorised connection to an organisation’s Office 365 tenant, the idea is to analyse the specialised concepts, terminology, and people’s names used across the organisation, in order to better identify and transcribe those words in meetings. Microsoft has released this capability into private preview; you have to apply to take part.
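To make the cross-talk attribution idea above a little more concrete, here is a small, purely hypothetical sketch (not the actual Azure Speech Services API) of what speaker-attributed transcription output might look like once overlapping speech has been separated: each recognised utterance carries a speaker label and a timestamp, so rendering a readable transcript is simply a matter of sorting by time.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    # One recognised phrase, as a diarising transcriber might emit it.
    speaker: str   # attributed speaker (e.g. resolved via face or voice ID)
    start: float   # offset into the meeting, in seconds
    text: str

def render_transcript(utterances):
    """Sort attributed utterances by start time and format them as a
    readable meeting transcript, interleaving cross-talk by timestamp."""
    lines = []
    for u in sorted(utterances, key=lambda u: u.start):
        lines.append(f"[{u.start:6.1f}s] {u.speaker}: {u.text}")
    return "\n".join(lines)

# Two people talking over each other: because each phrase is attributed,
# the rendered transcript still interleaves them in chronological order.
meeting = [
    Utterance("Alice", 3.2, "I think we should ship on Friday."),
    Utterance("Bob",   2.9, "Sorry, before we decide..."),
    Utterance("Alice", 7.5, "Go ahead, Bob."),
]
print(render_transcript(meeting))
```

The `Utterance` structure and speaker names here are invented for illustration; the point is only that attribution plus timestamps is enough to reconstruct a coherent transcript even when more than one person is talking at once.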
We look forward to seeing Microsoft’s new meeting capabilities at play in our own meetings at Silverside. But we won’t forget the deeper human patterns either – a clear purpose, a reason for each person to be there, and a clear sense of next steps at the end of each meeting.
Download our eBook Architecting Smarter Meetings.