Search refiners are yesterday’s algorithm

A while ago I came to the conclusion that Delve is definitely the future of search in Office 365. Delve is great for targeted, individually relevant search, connecting people to content and content to people through the Office Graph. I believe it is the future, but it is not quite there yet. The missing component from an organisational perspective is the search view and experience that the organisation wants you to have – connecting you to content that it believes is relevant to you. Currently this is possible in Search using items like Promoted Results and Result Blocks. I know it is possible today, if you want to get your coding hands dirty and implement your own take on the Office Graph, but hands up who wants to build their own version of Delve? It’s way too much work if you ask me!

Ok, I hear you say, but what about refiners? Delve does not offer many of them. But then again, what do refiners really do… Refiners are yesterday’s algorithm.

A not so deep dive into what is happening behind the scenes when you view a page in SharePoint Online [Part 3]

This is the final part of a three-part post where I present my take on what is happening behind the scenes when you view a page in SharePoint Online. In the first part I discussed the components that are needed to assemble a SharePoint page and how geography has an effect. In the second post, I focussed upon the processes that make significant contributions to the performance of a SharePoint page. In this post I’ll continue the focus from part 2 and in particular highlight a feature that will affect choices made in the design of solutions like Intranets in Office 365.

Search powered pages

A common design pattern, especially in Intranets, is to use search parts to render content, for example to serve content that is relevant to the user viewing the page. The calls made by search add a premium to the page load time, with the premium divided into two major elements: the time to fetch the results and the time to render them. Our Intranet home page makes extensive use of search parts, so it has been designed to make a single search call with the results then rendered in different parts of the page. This reduces the time to fetch the results, with the process to render them optimised in code. In other areas of the Intranet the pages can contain multiple search parts, which in turn make multiple calls, but the performance hit is mitigated as the remainder of the page content is relatively light and simple, so the page load times are not unduly impacted.
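The single-call pattern above can be sketched as follows. The `/_api/search/query` endpoint is the real SharePoint REST search endpoint, but the query text, row limit and the ContentType-based partitioning are illustrative assumptions rather than our actual implementation:

```javascript
// Sketch: one SharePoint search call serving several parts of the page.
// The /_api/search/query endpoint is real; the query, rowlimit and the
// ContentType-based partitioning below are illustrative assumptions.
function buildSearchUrl(siteUrl) {
  var query = "querytext='ContentType:NewsArticle OR ContentType:Event'";
  return siteUrl + "/_api/search/query?" + query + "&rowlimit=50";
}

// Partition one result set into the buckets each part on the page renders,
// so only a single round trip to search is needed.
function partitionResults(rows) {
  var buckets = { news: [], events: [], other: [] };
  rows.forEach(function (row) {
    if (row.ContentType === "NewsArticle") buckets.news.push(row);
    else if (row.ContentType === "Event") buckets.events.push(row);
    else buckets.other.push(row);
  });
  return buckets;
}
```

The design choice is simply to pay the search premium once and divide the answer in code, rather than paying it once per web part.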

Works great on premises, fails you in the cloud

The design of our Intranet takes advantage of the SharePoint Rendition Engine to serve images. Renditions work by serving up different versions of an image based on pre-defined image dimensions. This is especially useful in areas of the Intranet such as the Home page where a single image can be used twice within the page: as a carousel image and as a thumbnail in the carousel. This avoids situations where the full resolution image is downloaded, potentially multiple times, to be resized through JavaScript or CSS. An image rendition is created by SharePoint the first time someone visits the page holding the image. Subsequent requests for the same image, say when another person browses to the page, are much faster as SharePoint simply serves the stored rendition rather than creating and serving it. The SharePoint server responsible for the creation and storage of the rendition is the ‘Web Front End’.
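As a sketch of what a rendition request looks like: SharePoint selects a rendition via the `RenditionID` query string parameter on the image URL. The helper and the IDs below are illustrative assumptions – actual IDs depend on the renditions configured in your site collection:

```javascript
// Sketch: request a pre-sized rendition instead of the full image.
// The RenditionID query string parameter is how SharePoint selects a
// rendition; the IDs used in the comments below (carousel = 1,
// thumbnail = 2) are illustrative assumptions.
function renditionUrl(imageUrl, renditionId) {
  var sep = imageUrl.indexOf("?") === -1 ? "?" : "&";
  return imageUrl + sep + "RenditionID=" + renditionId;
}

// e.g. renditionUrl("/PublishingImages/banner.jpg", 1) for the carousel
// and renditionUrl("/PublishingImages/banner.jpg", 2) for the thumbnail.
```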

When comparing performance between SharePoint 2013 and SharePoint Online there is a significant difference in the number of ‘Web Front Ends’. The result is that for SharePoint Online there is a delay in the serving of the image rendition. In a SharePoint 2013 environment there is a limited number of ‘Web Front Ends’ and so, in a relatively short period of time, they would have been visited by a large number of people, with the result that the caches on each server will hold a complete set of renditions. In SharePoint Online there are many ‘Web Front Ends’, in fact probably too many to be visited a sufficient number of times, and so each one will only hold a partial set of renditions. Additionally, for SharePoint Online, Microsoft have had to make some performance choices that result in the caches on each ‘Web Front End’ being very small (so they cannot hold many renditions) and regularly cleared. This results in a high number of ‘cache misses’ when a rendition is requested in SharePoint Online, i.e. it looks for the rendition but one is not available. A ‘cache miss’ forces the ‘Web Front End’ to create and serve a new rendition as if the server was being visited for the first time. This adds a minimum premium of 3 to 4 seconds of page load time. This post by Chris O’Brien goes into more detail and in a follow-up post he offers a potential solution (which, incidentally, we are considering implementing).

Wrap up

I hope you have found this mini-series useful. To recap:

My summary from the first post was that we can only influence the Page Content as the remainder is controlled by Microsoft. However, we can make improvements through understanding where the content is served from. These improvements should optimise access to and transmission of content from the local touch points of the Microsoft Content Delivery Network as well as from the datacentre that is hosting the tenant.

In the second post I focussed on the consumer vs corporate user experience as well as differences between web browsers. Each has a major influence on performance or the expectations around performance.

In the final part I focussed upon design choices used in SharePoint Online and in particular SharePoint Renditions which makes a significant negative contribution to the performance of a SharePoint page. The impact of its contribution may affect your design choices say when creating an Intranet in Office 365. I strongly recommend reading the posts by Chris O’Brien for more detail [Posts #1 and #2].

 

A not so deep dive into what is happening behind the scenes when you view a page in SharePoint Online [Part 2]

This is the second of a three-part post where I present my take on what is happening behind the scenes when you view a page in SharePoint Online. Whilst the title describes the focus to be SharePoint Online, I do stray into other areas of Office 365 like Yammer and Exchange. In the first part I discussed the components that are needed to assemble a SharePoint page and how geography has an effect. In this post, I focus upon the processes that make significant contributions to the performance of a SharePoint page.

Key influencers

Reviewing the data reveals that time to process an element has a greater influence on the performance of the page rather than the size of the page. For example, the typical file size of an image rendition, used say in a news article, is small, around 20Kb, but the time to process and serve it is long (typically 3s from request to receipt). Conversely the core SharePoint JavaScript file ‘O365ShellG2Plus.js’ is the largest single item, around 862Kb, though it is served very quickly in 0.87s. The speed to serve the file is a result of using the Content Delivery Network to provide the file from a location close to the person viewing the page. The rendition takes longer as it is served from the datacentre used by the Tenant.
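The figures above can be reduced to an effective throughput, which makes the point starkly (a rough calculation, ignoring connection set-up and other overheads):

```javascript
// Effective throughput in Kb/s from the figures quoted above.
function throughputKbPerSec(sizeKb, seconds) {
  return sizeKb / seconds;
}

// O365ShellG2Plus.js: 862Kb in 0.87s -> roughly 990 Kb/s (CDN-served)
// Image rendition:     20Kb in 3s    -> roughly 7 Kb/s   (datacentre-served)
```

The small rendition is served over a hundred times more slowly per kilobyte than the large script, which is why processing time, not size, dominates.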

Consumer vs Corporate

If your users are like our users, then they have performance expectations based on their personal devices and their own internet connections. These expectations are usually at odds with the performance they get when using a corporate device on a corporate network. This puts Office 365 in a difficult position – they use it at home on their own iPad and it is blisteringly fast; they use it in the office on their corporate PC and it appears infuriatingly slow. You know that they are not comparing apples with apples, but it’s a comparison that they’ll make, so it needs to be managed as part of the adoption communications.

This can be illustrated using page load data. Ad hoc testing with a Corporate PC and a Consumer PC, using the same user account, the same WiFi connection and a similar build, reveals a pronounced difference in the payloads received:

[Image: page load 2]

The analysis reveals two key differences:

  1. For the same page the downloads are much larger on the Corporate PC
  2. The download size differs by browser

Point 1 might be a quirk unique to us but I suspect we are not alone. The reason why, for the same page, the downloads are much larger on the Corporate PC appears to lie in how Internet Explorer 11 is handling JavaScript files. As mentioned earlier, ‘O365ShellG2Plus.js’ is the largest single file at 862Kb. IE11 on a Corporate PC appears not to respect the compression applied to the file and so it downloads the full 862Kb. Chrome on a Corporate PC, on the other hand, respects the file compression, downloads the file in a compressed state and then inflates it to 862Kb when complete. The compressed file size transferred is 172Kb. IE11 on a non-Corporate PC also respects the compression and transfers 172Kb prior to inflation to 862Kb. Therefore, there must be something that is affecting IE11 – I suspect we have a setting enabled in IE that is not helping, and if we’ve had need to set it then others have probably done the same.
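One way to check for this behaviour yourself is the Resource Timing API, which exposes transferred versus decoded sizes per resource. `transferSize` and `decodedBodySize` are real `PerformanceResourceTiming` properties (Chrome supports them; IE11 does not expose these Level 2 fields, so run the check in Chrome); the 0.9 ratio used to flag a transfer as uncompressed is my own rough threshold:

```javascript
// Sketch: flag resources whose transfer looks uncompressed.
// transferSize = bytes on the wire; decodedBodySize = bytes after inflation.
// The 0.9 ratio is an assumed threshold, not an official figure.
function looksUncompressed(entry) {
  if (!entry.transferSize || !entry.decodedBodySize) return false;
  return entry.transferSize / entry.decodedBodySize > 0.9;
}

// In a supporting browser you would feed it real entries:
// performance.getEntriesByType("resource")
//   .filter(looksUncompressed)
//   .forEach(function (e) { console.log(e.name, e.transferSize, e.decodedBodySize); });
```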

I find the 2nd point a source of amusement – why does Chrome perform better than IE? Surely Internet Explorer should be better, as it’s a Microsoft product just like Office 365? It’s a question that I cannot answer. I cannot help but chuckle that when I visit our Developers and the likes of Yammer, it’s Chrome that I see on their screens, yet they are developing for Office 365… I’m hoping that Microsoft’s Edge browser will be a vast improvement.

Stay tuned!

My summary from the first post was that we can only influence the Page Content as the remainder is controlled by Microsoft. However, we can make improvements through understanding where the content is served from. These improvements should optimise access to and transmission of content from the local touch points of the Microsoft Content Delivery Network as well as from the datacentre that is hosting the tenant.

In this post I focussed on the consumer vs corporate user experience as well as differences between web browsers. Each has a major influence on performance or the expectations around performance.

In the final part I will focus upon design choices used in SharePoint Online and in particular one that makes a significant negative contribution to the performance of a SharePoint page. The impact of its contribution may affect your design choices say when creating an Intranet in Office 365.

 

A not so deep dive into what is happening behind the scenes when you view a page in SharePoint Online [Part 1]

This is the first of a three-part post with my take on what is happening behind the scenes when you view a page in SharePoint Online. Whilst the title describes the focus to be SharePoint Online, I do stray into other areas of Office 365 like Yammer and Exchange. The posts are based upon a similar summary that I produced for our network analysts to help them put their analytics in context and help them focus on the elements that we could influence. I’m not a network specialist so I’ve pitched this at a level that I’m comfortable with and one that hopefully makes the subject a little more accessible for all.

Anatomy of a SharePoint page

When you view a page in Office 365 the elements come from across the globe and are assembled in your browser to create the page. This is best illustrated by an example using a fictional Tenant based in Western Europe, assuming your Tenant is hosted in Microsoft’s Dublin datacentre:

Imagine you were based in Glasgow and viewing a News Article in your Intranet. The article contains a Yammer feed. Then:

  • The Suite Bar, at the top of the page, is being served to you from London (which is your nearest Content Delivery Network point of presence – more about this later). A short hop of 345 miles using IP by Avian Carrier.
  • The notifications that appear under the bell icon in the Suite Bar are being served from Dublin as this is where your email lives. (247 miles)
  • The code that powers the SharePoint page is coming from London.
  • The content of the News Article is coming from Dublin as this is where your content lives.
  • The Yammer feed is being served from Chicago (Yes, that’s right, from over 3,665 miles away!)

If we extended the scenario, and you are now viewing the same page from an office in Adelaide, you can substitute London with Melbourne (a mere 452 miles away). However, the Dublin content will still come from Dublin and Yammer will still come from Chicago. That’s 10,317 miles to Dublin and 9,913 miles to Chicago!

My example is a vast oversimplification, so it’s worth saying at this point that associated with each element there are a number of transactions, like checking your permissions to view the content, making sure you have a valid access token for Office 365 and so on. Completing these transactions adds to the time it takes the overall page to load.

Office 365 is architected in such a way that common, frequently used elements, like the Suite Bar, are distributed to points around the world for local collection. ‘One-off’ elements like the image in the page are not distributed as it is more efficient to serve them on demand. Yammer and Sway are exceptions as they only live in one location. In the case of Yammer, the application is very complex and so, at the moment, it only resides in Chicago. Yammer might look basic in terms of functions and features but under the hood there are complex algorithms running that help tailor it to you, and this makes it a complex application to replicate in other datacentres. You can find out where your data lives using ‘Where’s my data’.

“Ye cannae change the laws of physics”

Clearly, geography has a significant impact upon performance. All Office 365 content is securely transmitted via the Internet and the path it takes is often not the most direct. If your network has a hub and spoke arrangement, then additional distance is added to and from the hub. Undersea cables, which carry the vast proportion of all internet traffic, snake their way around the world joining countries and continents and adding miles to the routing. Whilst light travels at over 186,000 miles per second, Scotty often reminded Captain Kirk “Ye cannae change the laws of physics”, and so there will always be a time premium the further away you are from the data.

Microsoft overcome this to a degree by serving common, frequently used elements via a Content Delivery Network (CDN) for local collection. Items like page content, including image renditions, are not served via a CDN as it is more efficient to serve them on demand. However, for the Content Delivery Network to be effective the content should be served from a point that is closest to you. If your internet traffic is routed via your company’s regional data centre, say in a hub and spoke model, then sometimes the CDN content is served from a location close to that datacentre rather than from the CDN end point closest to you. Elements like DNS geolocation can affect your email experience: ideally it connects you via a Microsoft datacentre close to where you actually are and then retrieves your email from your tenant datacentre utilising a fast datacentre-to-datacentre connection.

This subject is covered in more detail in a Microsoft case study entitled ‘Optimizing network performance for Microsoft Office 365’. In the case study they highlight techniques like ‘split tunnelling’ and ‘on-site edges’ as methods for improving performance. The case study also highlights the need to use the appropriate type of search web part and to consider the site navigation methodology.
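The laws-of-physics point can be put into rough numbers. Light in fibre travels at roughly two-thirds of its speed in a vacuum (a common rule of thumb, not a measured value for any particular route), and real routes add routing, processing and protocol overheads on top of this floor:

```javascript
// Lower bound on round-trip time from distance alone.
// 186,000 miles per second is the speed of light in a vacuum; the 2/3
// factor for fibre is the usual rule of thumb, an assumption here.
function minRttMs(miles) {
  var milesPerSecondInFibre = 186000 * (2 / 3);
  return (2 * miles / milesPerSecondInFibre) * 1000;
}

// Adelaide to Dublin (10,317 miles): ~166ms of round trip before any
// server even starts processing the request.
```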

“Ah but it’s the size of the page…”

Returning to the example at the start, it is possible to classify the elements in a typical Intranet page and detail the page payload sizes for the first visit (with the cache cleared) and subsequent visits (using cached content). It is also possible to identify the elements that you can influence (highlighted in bold) and the ones that you cannot. The figures in the table are real as I’ve taken them from the home page of our Intranet, and they include the additional load generated by our analytics solutions, Web Tuna and Google Analytics.

[Image: page load]

[ Source of reference for the CDN details ]

Our contribution to the overall payload is:

  • First visit: 1161.94Kb or 15% of total for visit
    • [1395.75Kb less the Microsoft files of 260.42Kb]
  • Second and subsequent visits: 156.729Kb or 79% of total for visit

Of all of the files that make up the Home page it is worth noting that ‘O365ShellG2Plus.js’ is the largest single file at 862Kb, and it is produced by Microsoft. This file is one of the key building blocks used by SharePoint. The largest single file created by us is a minified JavaScript file, 174Kb in size. That is not the largest element of the page content, however: that is the underlying SharePoint theme file at 260.42Kb, which is also produced by Microsoft.

Stay tuned!

In the second and third part I focus upon the processes that make significant contributions to the performance of a SharePoint page.

Workflow is a service and not a solution

A couple of posts by Simon Terry, where he describes how algorithms can be part of Working Out Loud and designing workflows for the people, sparked my thinking about how algorithms can reshape the way we work.

Over the last 3 years or so (as that’s how long I’ve been in ‘IT’) I’ve formed a stronger and stronger belief that workflow is a service and not a solution. In this space Microsoft are making some decent headway with PowerApps and Azure Logic Apps. These services allow people to build what they need around them to suit them. There is a level of platform agnosticism that elevates these kinds of products to a level of collaborative service that cannot be fulfilled by an in-product solution. People can now shape the flow, leaving products like Yammer and SharePoint to focus on what they are good at.

I deliberately made the 3-year reference as before that I was a Civil Engineer. As an engineer, tailoring the workflow to suit both the people and the process being undertaken was part of everyday life (sometimes at the collective subconscious level). Granted, certain processes had to be conducted in a specific order or things would fail, or worse still people would get hurt, but often the flow of work would flex, change and adapt based on the people. What was truly amazing, when you stepped back, was how this would (normally) happen without intervention.

What surprised me about IT was that the attitude and solutions for workflow are years behind something as old as the building game. It is still about lining up the dominos and knocking them down in order. Try introducing variables to the workflow and the systems quickly fail, unable to reroute beyond simple branching logic or adapt when iterations are needed. The difficulty is that people are, and cause, the variables.

I think elevating workflow to a position outside of the products, into a service, is the first step. Thereafter the real magic and people focus can be applied. The second step (which we can start to do today) is to let people design the flow to match their needs and work style. Yes, the flow will pass through some stage gates (as that is what organisations demand) but the path will be a little less constrained. The third step is to apply some Delve-esque magic. Algorithms are learning how we work, with whom, at what times and with what. Using the algorithm, we should be able to generate a workflow that matches the individuals involved: playing to their strengths, sequencing it to pick the right time for the task to be done, using their preferred communication method etc. At that point we will be getting close to construction in terms of a people-centric workflow.

 

[Image: delve_organization_view]

Imagine what could happen if we combined the insight from Delve Analytics with tools like Planner and PowerApps.

It’s not an Oscar but…

Just over 3 years ago I finished my last construction project. The construction market was deep in recession and I was struggling to see where the next opportunity would come from. I faced a choice, one faced by many of my colleagues, the choice to “stick or twist”. Some of my colleagues chose to “twist”, dusting off their CVs and striking out for roles, typically outside of the industry. My choice was to “stick”, well at least with the same company, as there were opportunities if I was willing to move outside of construction. I took a gamble with a six-month secondment into our internal IT service.

For me, IT was a gamble as all of my formal training is in construction and engineering. I gambled on having some transferable skills coupled with a passing interest in technology. I felt that I offered great value as I could help the service with my experience of how colleagues actually used the technology provided to them. I’ve always tried to include technology in my construction projects whenever I could – from the early days of installing the foundations to the Reading Room at the British Museum to the digital models used in my last project.

As it happened, it took a while for me to understand which of my skills were actually transferable. I extended my secondment several times before settling in my current role. It’s in this role where I have finally realised which skills are actually transferable. It’s also this role that’s led to a moment of real pride. In fact, it’s generated the same satisfaction that I used to get when I walked around a completed construction project.

About 18 months ago, the decision was made to rollout Office 365. My role was to use my business knowledge to shape the solutions that we would provide through Office 365. One thing led to another and I found myself becoming deeply immersed in the product and the solutions. I even started to describe myself as a fledgling Enterprise Architect in my LinkedIn profile. I found that skills I learned from construction, like the ability to take a holistic view and the value of a good specification, helped me assimilate and understand the complexity and depth of Office 365. Similarly, there are parallels between how construction projects run and how IT delivers projects. On a construction site, the team would meet daily to discuss the tasks to be completed. The meetings would not have a name but typically the team would gather around the project plan or a set of drawings and work through the days and weeks ahead, focussing upon the items to be built in the period. In IT, Developers do the same thing but call it a ‘daily stand-up’ and describe the process of focusing on the items to be built in a period as ‘Agile’.

In the last 12-months, I’ve become both architect and evangelist. I attribute a lot of my success to my involvement in the Office 365 Network. It’s through this network that I’ve been able to accelerate my learning, deepen my knowledge and crucially make contacts and friends with people who are on a similar journey. The Office 365 Network is built on a product called Yammer and I’ve taken to Yammer like a duck to water. Perhaps there is something in Yammer that appeals to the engineer in me and it certainly clicks with my personality type.

Unbeknownst to me, my contributions to the Office 365 Network had been spotted by staff in Microsoft. I suddenly became aware when a notification landed in my inbox just before Christmas. I had been nominated for consideration as a Microsoft Most Valuable Professional (MVP):

The Microsoft MVP Award is an annual award that recognises exceptional technology community leaders worldwide who actively share their high quality, real world expertise with users and Microsoft. All of us at Microsoft recognise and appreciate Simon’s extraordinary contributions.

I was flabbergasted and humbled by the nomination. To be perfectly honest I expected nothing to come of it. There are only around 4,000 MVPs in the world and most have worked in the industry for years, building up their knowledge, networks and reputation. I felt that I did not fit the mould. However, the emails kept coming from Microsoft and today one landed with the subject:

“Congratulations 2016 Microsoft MVP!”

 

Tonight I’m proud, engineer proud. 🙂

 

Triaging Yammer Embed

Yammer Embed should work but there are times when you could end up chasing your tail trying to understand why it does not work. This is especially true if you are new to using Embed. So I thought I’d put a post together that describes my triaging process when things go wrong.

Start from something that works!

Now I know it seems simplistic but the embed code is *code* and by its very nature sensitive to typos. Hence I recommend using the widget to generate the code, especially if you are new to it.

When triaging a problem, I recommend starting with the Yammer Widget as it allows you to test Embed whilst ruling out SharePoint and other gremlins. It’s a relatively straightforward fill-in-the-blanks exercise. I’d start by simply setting the Widget to display your home network, as shown below:

[Image: widget example]

 

If you cannot get the Widget to work, then it is likely that you have either browser or network related problems. In the case of your browser make sure you are using a supported browser e.g. Internet Explorer 11. If you are using an unsupported browser (refer to the ‘Browser and system requirements’ section), then you can expect a degraded or non-existent Yammer experience. Yammer contains code that identifies the type of browser and modifies the Yammer experience accordingly e.g.

[Image: yammer ie9 example]

 

 

If it is not the browser and the widget fails to work, then I’d start suspecting either a Service Outage (which you should be aware of as these are posted in the Office 365 Admin Centre) or a configuration issue with your network. Later in this post I describe a couple of methods for confirming the basic configuration of your network.

Use the code from the widget

Assuming that you can get the widget to work, the next step is to reuse the widget code in a SharePoint page. For this you need a page in your environment that is ‘straight out of the box’. By this I mean it does not have additional styling or code as you might find in an Intranet page. The reason for using a basic page is to rule out any issues like JavaScript or CSS clashes. I use the code from the widget as it will be error free. A common mistake when hand-coding embed script is to miss off a comma, and a missing comma can be hard to spot!

In order to perform this test, you’ll need the code from the widget which has been topped and tailed with some additional code to tell SharePoint where to get the Yammer JavaScript file from and that it is code to be executed. If you need a point of reference for this take a look at the basic example at the top of my earlier blog post about using Yammer Embed.

I would conduct this test using a Content Editor Web Part (CEWP) and not a Script Editor Web Part (SEWP). Using a CEWP means that you can vary the code without losing access to the page you are testing it in! Trust me it is super annoying to paste some flaky code into a SEWP for you to lose access to the page and have to roll a version back (if you have versions enabled!) or worse still delete the page and start again.

You should be able to get the basic code to work and from there you can start testing in the target environment e.g. an Intranet page. If that fails, then you have a SharePoint configuration or code security problem.

Check your SharePoint configuration

I’ve only used SharePoint 2013 and SharePoint Online and Embed works fine in both environments. I have read reports from users of SharePoint 2010 that Embed can fail in that environment. In some cases, this is down to SharePoint rendering the page in such a way that the browser switches to a compatibility mode which is lower than the supported mode.

Another potential blocker to Embed working is whether third party code is allowed to execute in your SharePoint environment. At this point it would be worth following my network configuration checks before pointing your finger at your SharePoint Administrator!

Check your network

Network configurations and firewalls can introduce a level of variability and complexity that can seem daunting. However, there are some really simple steps that you can perform prior to involving your network administrator.

I’d start by checking that you can access:

https://c64.assets-yammer.com/assets/platform_embed.js

You can do this test by pasting the address above into the address bar in your browser. The result should be that your browser will attempt to download a file:

[Image: yammer code]

 

If that fails, then there is either a browser trust issue or, more likely, your firewall is getting in the way. In this context I use the term ‘firewall’ quite loosely. The file could be blocked by your firewall, it could be intercepted by your anti-virus application, a proxy service might be getting in the way, et cetera. If the JavaScript code cannot be loaded, the Yammer Embed part will not work! It would be odd if you can get the Widget to work but not be able to download the file.
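To make that triage a little more systematic, here is a sketch that maps the HTTP status of the `platform_embed.js` request to a likely cause. The status codes are standard HTTP; the mapping to causes is my own rule of thumb, not official guidance:

```javascript
// Sketch: map the status of the platform_embed.js request to a likely
// cause. The mapping is a personal rule of thumb, not official guidance.
function diagnoseEmbedFetch(status) {
  if (status === 0) return "Blocked before it completed: firewall, proxy or browser trust issue";
  if (status === 200) return "Script reachable: suspect SharePoint configuration instead";
  if (status === 403 || status === 407) return "Proxy or authentication is in the way";
  return "Unexpected response (" + status + "): involve your network administrator";
}

// Hypothetical usage with XMLHttpRequest (status reads as 0 if the
// request never completed):
// var xhr = new XMLHttpRequest();
// xhr.open("GET", "https://c64.assets-yammer.com/assets/platform_embed.js");
// xhr.onloadend = function () { console.log(diagnoseEmbedFetch(xhr.status)); };
// xhr.send();
```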

 

Go back to basics

If you’ve read this far and are still struggling to get Embed to work, then perhaps you need to take a big step backwards and get your network administrator to confirm that the basics required to get Yammer to work are also in place. This might seem odd, especially as you’ll no doubt be able to access Yammer through your browser, but some networks are tightly nailed down. Microsoft recommend that, to successfully use Yammer, a number of addresses and rules are allowed through your network and firewalls. Some administrators do not like opening up all of the recommended connections and so it’s worth double checking that they have. As a point of reference, the firewall and trust rules are detailed here: https://support.office.com/en-us/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2#BKMK_Yammer . It’s worth them also checking the SharePoint requirements (listed on the same page, several sections above the Yammer ones).

As I mentioned earlier it might also be worthwhile talking with your SharePoint Administrator as they might have configured SharePoint to prevent the execution of code etc. If all else fails raise a support ticket with Microsoft.

 

 

Using Yammer Embed in SharePoint Online

One of the use cases for Yammer Embed is to provide a commenting system for SharePoint pages. This can be achieved using the Yammer Embed Open Graph method. It is possible to generate the necessary code for this using the Yammer Widget (which is my preferred way as you can test the options as you go) or code from scratch using the published documentation. An example code snippet is shown below:

<script type="text/javascript" src="https://c64.assets-yammer.com/assets/platform_embed.js"></script>

<div id="embedded-feed" style="height:500px;width:500px;"></div>

<script>
yam.connect.embedFeed({
  container: '#embedded-feed',
  network: 'your_network_name.com',
  feedType: 'open-graph',
  config: {
    header: false,
    footer: false,
    showOpenGraphPreview: false,
    promptText: "Comment on this page",
    defaultGroupId: 4191007
  }
});
</script>

I recommend playing with the Yammer Widget to test the configuration options.

Using Yammer SSO and Yammer DSync?

If you are using Yammer SSO and Yammer DSync in your environment then you will need to adjust your code by adding the use_sso: true element to the configuration section. You will also need to be very aware that Yammer SSO and Yammer DSync are being deprecated and will stop working after December 1st, 2016. You will not be able to set up new configurations with Yammer SSO and DSync after April 1st, 2016. An example code snippet is shown below:

<script type="text/javascript" src="https://c64.assets-yammer.com/assets/platform_embed.js"></script>

<div id="embedded-feed" style="height:500px;width:500px;"></div>

<script>
yam.connect.embedFeed({
  container: '#embedded-feed',
  network: 'your_network_name.com',
  feedType: 'open-graph',
  config: {
    use_sso: true,
    header: false,
    footer: false,
    showOpenGraphPreview: false,
    promptText: "Comment on this page",
    defaultGroupId: 4191007
  }
});
</script>

Adding the code to SharePoint Online

There are a couple of options for adding the embed code to SharePoint Online, and I’ll cover them in detail in another post. If you wish to test your code, simply paste it into a Script Editor Web Part. The result will look something like this:

basic comments

Creating a comment using the feed sends details of the page to Yammer, which then creates a specific Yammer page to hold the comments. The result in Yammer will be:

basic page in yammer
In my opinion, the outcome is less than optimal; to understand why, we need to go behind the scenes.

What is happening behind the scenes

When you post using Yammer Embed, Yammer attempts to resolve information about the SharePoint page using a behind-the-scenes service (previously Embed.ly). It then uses this information to set items like the name of the page in Yammer and its descriptive text. However, this service cannot resolve metadata from a page that requires authentication to access, sits behind a firewall, and so on. That includes pages in SharePoint Online, as they sit behind an exterior ‘Sign in to Office 365’ sign-in page. When the service attempts to resolve the page information, the sign-in page gets in the way. The result is an incomplete capture of information: the pages in Yammer all share the title ‘Sign in to Office 365’ and the descriptive text ‘It looks like you are on a slow connection. We’ve disabled some images to speed things up…’. Each page is still unique, as it is keyed on the URL of the SharePoint page, but the experience is sub-optimal; searching for a particular page, for example, returns many identical-looking results.
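To illustrate why the sign-in page causes the problem, here is a minimal sketch of the kind of metadata extraction such a resolution service performs. The function extractPageMetadata is hypothetical (it is not part of Yammer or any of its APIs); it simply shows that when the scraper is served the sign-in page instead of the real page, the sign-in page’s title is all it can capture:

```javascript
// Hypothetical sketch of how an unauthenticated scraper resolves page metadata.
function extractPageMetadata(html) {
  const og = {};
  // Collect any Open Graph <meta> tags (og:title, og:description, ...)
  const metaRe = /<meta\s+property="og:(\w+)"\s+content="([^"]*)"/g;
  let m;
  while ((m = metaRe.exec(html)) !== null) {
    og[m[1]] = m[2];
  }
  // Fall back to the <title> element when no og:title is present
  if (!og.title) {
    const t = /<title>([^<]*)<\/title>/.exec(html);
    if (t) og.title = t[1];
  }
  return og;
}

// What the scraper actually receives for an authenticated SharePoint page
// is the Office 365 sign-in page, not the page itself:
const signInHtml = '<html><head><title>Sign in to Office 365</title></head></html>';
console.log(extractPageMetadata(signInHtml).title); // "Sign in to Office 365"
```

Every authenticated page resolves to the same sign-in metadata, which is exactly why all the Yammer pages end up with identical titles.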

A solution

The solution is to supply the information Yammer needs to resolve the page details yourself. This is achieved using objectProperties in the Embed code. Unfortunately, you cannot use the Yammer Widget for this, though I hope in future it will provide some common examples. In the case of SharePoint you will also need to know some page properties, so you may need to enlist the help of your friendly developer. The method is to add a section to the Embed code that provides the Open Graph metadata for consumption by Yammer (which it in turn uses to create the page). An example code snippet is shown below:

<script type="text/javascript" src="https://c64.assets-yammer.com/assets/platform_embed.js"></script>

<div id="embedded-feed" style="height:500px;width:500px;"></div>

<script>
yam.connect.embedFeed({
  container: '#embedded-feed',
  network: 'your_network_name.com',
  feedType: 'open-graph',
  config: {
    header: false,
    footer: false,
    showOpenGraphPreview: false,
    promptText: "Comment on this page",
    defaultGroupId: 4191007
  },
  "objectProperties": {
    "url": location.href,
    "description": "Comments feed for page",
    "title": document.title + " (captured with document.title)",
    "image": "https://your_sharepoint.com/sandpit/Documents/CompassIcon_100.jpg",
    "type": "page"
  }
});
</script>

Note the comma that follows the closing brace of the config section, just before objectProperties; the code will fail if this comma is omitted. The result in SharePoint will look something like this:

feed in SPO after

With the corresponding page in Yammer:

feed in yammer

What the code is doing

The purpose of each item in the snippet is as follows:
“url”: location.href sets the URL property of the Yammer page to that of the page in SharePoint. (location.href is a standard browser property, not something SharePoint-specific.)
“description”: “Comments feed for page” sets the description text that will appear on the Yammer page. In this example I have used fixed text, but you can use a SharePoint page property instead (for example a Byline property, if you are using one), or a combination of property and text (see the next item for an example).
“title”: document.title + ” (captured with document.title)” sets the Yammer page title to that of the SharePoint page and appends an additional snippet of text. A potential gotcha is changing the Title of the SharePoint page after the Yammer page is created: the Title field will keep the original name unless you edit it. (I’ll cover changing this in another post.) As with the other items, you could use a different SharePoint page property to populate this item. (document.title is a standard browser property.)
“image”: “https://your_sharepoint.com/sandpit/Documents/CompassIcon_100.jpg” sets the image shown in the header of the Yammer page (much like a Group icon). In this example I have set it to use our Intranet icon.
“type”: “page” tells Yammer that the object is a page. If you were using this solution for, say, Office 365 Video, you might set this to “video”.
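If you use this pattern on many pages, it can help to assemble the objectProperties block with a small helper so each page supplies its own values. The sketch below is an illustrative assumption, not part of the Yammer API; buildObjectProperties and the shape of its pageInfo argument are names I have made up for this example:

```javascript
// Hypothetical helper that assembles the objectProperties block from a
// page-like object, so the same embed code can be reused across pages.
function buildObjectProperties(pageInfo) {
  return {
    url: pageInfo.url,                       // unique key Yammer uses for the page
    description: pageInfo.byline || 'Comments feed for page',
    title: pageInfo.title,                   // shown as the Yammer page title
    image: pageInfo.image,                   // header image, much like a Group icon
    type: pageInfo.type || 'page'            // e.g. 'video' for Office 365 Video
  };
}

// In the browser you would pass the live page values, e.g.:
// buildObjectProperties({ url: location.href, title: document.title, image: '...' });
const props = buildObjectProperties({
  url: 'https://your_sharepoint.com/sandpit/page.aspx',
  title: 'Sandpit page',
  image: 'https://your_sharepoint.com/sandpit/Documents/CompassIcon_100.jpg'
});
console.log(props.type); // "page"
```

The result of the helper would then be passed as the objectProperties value in the yam.connect.embedFeed call.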

Wrap up

And there you have it, a way of using Yammer Embed in SharePoint Online. In future posts I’ll cover methods of managing instances of Yammer Embed in SharePoint Online as well as items like renaming of SharePoint pages.

How we monitor and react to change in Office 365

[Image: Evergreen shipping liner, via The Loadstar]

Let’s start with an analogy

It’s one I use when explaining how change occurs in Office 365. It might need some polish!

The super-freighter in the image moves deceptively quickly and will cross the Atlantic in around 10 days. It is, though, slow to start, takes a long time to change direction, and often needs help doing so.

The vessel runs on a schedule, and its movements within controlled waters are published so that all mariners are aware of them.

When out at sea, normal waves have no impact upon it and radar is constantly scanning so icebergs and other debris are foreseen and avoided.

The containers it carries change regularly and very occasionally a “reefer” falls off and is lost at sea.

Now consider Office 365 to be the super-freighter, the containers a mix of product features.

The service is evergreen – that is, Microsoft updates it on a monthly cycle. Some of the Roadmap is published (the schedule), some of the changes are communicated (published shipping movements), some of the changes are simply imposed (containers fall overboard), and new features are added while others are taken away (containers are loaded and unloaded).

As a consumer, the bulk of this is out of our control. In addition, A/B testing can temporarily add or remove functionality for specific users (or, in freight terms, a container gets temporarily misplaced).

How do we monitor and react to change

To stay agile and proactive, we have established our Ship’s Bridge in Yammer by creating an O365 Change and Strategy Group. Members of the group are encouraged to #workoutloud, sharing their thoughts, observations and alerts as quickly and as honestly as possible. The group contains key stakeholders from across the IT service, and we can invite others in as the need arises. We monitor and react to change by:

1. Identify the potential changes

For this we have a network of inquisitive radar operators. We use a combination of:

  • the Microsoft Monthly Service Update ‘newsletter’ (which is issued by our Microsoft Technical Account Manager)
  • weekly reviews of the Message Centre
  • conversations with companies that we are working with
  • a near constant eye on the Office 365 Yammer Network
  • the Office 365 Roadmap (which some of us consume using a RSS feed)
  • plus some other sources that we’d not like to reveal

All of this knowledge is discussed in the dedicated Yammer group, and when we know the MSG ID (from Message Centre postings) we tag the conversations with it (as well as adding links to threads on the Office 365 Yammer Network (O365N) and so on). We also maintain a tracking spreadsheet (because you’ve got to have a spreadsheet). To be honest, it is too much like hard work, and Yammer and Microsoft should make this easier.

2. Discuss the potential changes

We have an allocated slot in our weekly team meetings to discuss roadmap items and how we intend to tackle them. Significant items are reviewed with projects and/or change tasks created.

The tricky part is not knowing exactly when a change will arrive. I usually take actions to try to get more information; Yammer announcements, for example, are usually pretty light on actual detail. What really helps is the Change Alerts group in the O365N, as customers (typically in North America) get a change days or weeks before it arrives in our UK tenants. By following that group, we get an early warning of impact and mitigation, plus customers we can talk to.

3. Test the change

Fingers crossed, it arrives in our First Release* tenant. At that point we play with it and test it, grab screenshots and prepare the communications for it. Every Office 365 feature has a dedicated space in our Intranet, and we typically prepare a new page that describes the feature and briefly explains how staff might use it.

An exception to the rule is Yammer: from one day to the next, and from one user to another, we sometimes do not know what is an A/B test and what is a feature release, as Yammer seldom uses the Message Centre (though I understand plans are evolving to improve that).

4. Communicate the change

Obviously, the greater the impact, the more we do. Massive change means staff briefings, board meetings and so on. The bulk of changes are communicated using our Intranet – we use the page created in step 3 as the news item. We also push news out across Yammer, again with links back to the article. Sometimes a targeted email blast is employed, and we have a dedicated service alerts banner that we can enable in our Intranet. In time we will start to surface the service announcements from the Office 365 Admin Centre within our Intranet using the Office 365 Management API.
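As a sketch of what that surfacing could look like, the snippet below filters a list of service messages down to active incidents. The field names (Id, MessageType, Status, Workload, Title) are modelled on the Service Communications part of the Management API but should be treated as assumptions here, and the authenticated API call itself is omitted:

```javascript
// Illustrative sketch: reduce a batch of service messages to the active
// incidents worth showing in an Intranet alerts banner. Field names are
// assumptions based on the Service Communications API, not verified here.
function activeIncidents(messages) {
  return messages
    .filter(m => m.MessageType === 'Incident' && m.Status !== 'Service restored')
    .map(m => ({ id: m.Id, workload: m.Workload, title: m.Title }));
}

// Sample data standing in for the (omitted) authenticated API response:
const sample = [
  { Id: 'EX1001', MessageType: 'Incident', Status: 'Investigating',
    Workload: 'Exchange', Title: 'Delays in email delivery' },
  { Id: 'MC2002', MessageType: 'MessageCenter', Status: 'Active',
    Workload: 'SharePoint', Title: 'Feature update' }
];
console.log(activeIncidents(sample).length); // 1
```

In practice the messages would be fetched with an app-only token and the filtered list fed into the service alerts banner.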

* As an aside, we operate a second tenant, with a small amount of content, in full First Release mode, and our Production tenant has Selective First Release for named users. Sometimes a change is hard to test because we do not have enough content or users in our test tenant. We are looking at using ShareGate to snapshot our production instance and replicate the content in test, though this will not replicate the number of users, which is vital for testing items like people search and profiles.