Channel: TechNet Blogs

No Software Assurance? Must-Dos after Feature Pack Installation


Microsoft just released Feature Pack 2 for SharePoint 2016 through the Public Update channel. Feature Packs add new functionality to on-premises SharePoint farms and help limit the feature gap with SharePoint Online.

Feature Packs 1 and 2 have delivered great improvements:

  • SharePoint Framework (SPFx)
  • Administrative Actions Logging
  • MinRole enhancements
  • SharePoint Custom Tiles
  • Hybrid Auditing (preview)
  • Hybrid Taxonomy
  • OneDrive API for SharePoint on-premises
  • OneDrive for Business modern experience (available to Software Assurance customers)

 

The License Trap

The last point in the list above should catch your explicit attention: the OneDrive for Business modern experience can only be used by Software Assurance customers.

Installing Feature Pack 1 or 2 turns on the OneDrive for Business modern experience by default, even though this feature requires a valid Software Assurance agreement.

Make sure you meet this license requirement. If you do not, you have to disable the OneDrive for Business modern experience. This is an easy task, but it has to be completed right after the installation of Feature Pack 1 or 2.

See how it works: Configure the OneDrive for Business modern user experience.

 


 

Final Note

I hope this article helps you. Please share your thoughts and experiences in the comments; any feedback or additions to this post are highly welcome.

Thank you for rating this article if you liked it or found it helpful.
 


Exchange 2013 / Exchange 2016 – a quick note about the server FQDN aka NLB FQDN


 

While planning or deploying Exchange 2013/2016, don’t forget these settings to get load balancing working correctly on your Exchange 2013/2016 servers:

 

Let's assume your Exchange 2013/2016 namespace is mail.contoso.com, which points to your load balancer’s IP address.

Set Outlook Anywhere Internal and External URLs to mail.contoso.com

https://practical365.com/exchange-server/exchange-2013-client-access-server-high-availability/

 

Use the following:

Get-OutlookAnywhere | Set-OutlookAnywhere -InternalHostname mail.contoso.com -InternalClientsRequireSsl $true

Note that Paul Cunningham gives an example using –InternalClientsRequireSsl $false, which is fine as well: internally we usually use Kerberos, which already encrypts the critical authentication data within the HTTP flow. In my case, however, the clients were in a different network zone that only had HTTPS (443) open towards the E2013/2016 servers' zone, hence the -InternalClientsRequireSsl $true in my case.
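Either way, it's worth verifying what you ended up with. Here is a quick sketch using the standard Exchange Management Shell cmdlet (run from EMS on one of your servers):

```powershell
# Sanity-check the Outlook Anywhere namespace and SSL settings after the change
Get-OutlookAnywhere | Format-Table Server, InternalHostname, ExternalHostname, InternalClientsRequireSsl -AutoSize
```

All servers should report the load-balanced namespace (mail.contoso.com in this example) rather than their individual server FQDNs.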

 

And of course, also ensure that mail.contoso.com resolves – you can use either nslookup or this neat PowerShell cmdlet:

Resolve-DnsName mail.contoso.com

Thanks Paul Cunningham for these – I discovered the “Resolve-DnsName” cmdlet thanks to him!

 

A note about the RPCClientAccessServer property

Although still present, that property is no longer used on Exchange 2013/2016 database objects. In Exchange 2010 and earlier, it was used by Autodiscover to give Outlook clients the RPC endpoint FQDN (usually the NLB name) – remember that Exchange 2013/2016 now use HTTPS only for client <-> server communications.

 

That’s just a quick note – please forgive the drafty formatting here.

cheers

Sam

Extending the Microsoft Office Bounty Program


Microsoft announces the extension of the Microsoft Office Bounty Program through December 31, 2017.  This extension is retroactive for any cases submitted during the interim.

The engagement we have had with the security community has been great and we are looking to continue that collaboration on the Office Insider Builds on Windows.  This program represents a great chance to identify vulnerabilities prior to broad distribution.

Program Details

Office Insider Builds give users early access to the latest Office capabilities and security innovation. By testing against these early builds, issues can potentially be found prior to production release. This helps improve quality and protect customers.

How it works

  • Types of vulnerabilities awarded and their details are listed in the Microsoft Office Insider Builds on Windows Bounty Program Terms, including:
    • Elevation of privilege via Office Protected View
    • Macro execution by bypassing security policies to block macros
    • Code execution by bypassing Outlook automatic attachment block policies
  • The program duration is from March 15 to December 31, 2017
  • Bounty payout ranges during this period will be $6,000 to $15,000 USD

Call to action: send your vulnerabilities to secure@microsoft.com and let us know that you want your submission to be part of this program!

As always, the most up-to-date information about the Microsoft Bounty Programs can be found at https://aka.ms/BugBounty and in the associated terms and FAQs.

 

Phillip Misner,

Principal Security Group Manager

Microsoft Security Response Center


It's been a while


It certainly has been a while. With so many life and technology changes, it's time to resurrect this blog from the dead and start contributing once again.  This post will be brief, but I wanted to set the stage: I will be starting from square one on a lot of topics and technologies. Anyone who needs or wants to start learning about the Microsoft technologies I am passionate about will have the option to learn the basics again, like installation and configuration, as well as more advanced topics in deep-dive sessions on troubleshooting and some of the lessons learned going from customer to customer.

Thanks,
Brandon

 

Smart Card Logon Enforcement – Long Edition!


Hey everyone - it's been a while since the last post, with so much going on in IT: the boom of the cloud, security breaches/incidents, new products, etc.  I wanted to take a little time to talk about something that hit my desk again recently and that most folks in the DoD and other spaces wind up doing at some point, usually in various iterations across multiple networks.  You may be looking to give it a go for the first time, or maybe you're looking at making some changes in how you skin this cat.  There are some choices and things to be aware of when you start considering how to approach enforcement of smart card logon (SCL).  First I'll talk about how you can go about requiring that a smart card be used for "interactive logon," which is logon type 2.  Logon types are important when you start talking about SCL enforcement and enablement, which are two distinctly different things - more on that later.  SCL enforcement can be accomplished in one of two ways: by the individual machine via local security policy, or per user.

 

COA 1 - Enforce SCL using a local security policy setting

This option is enforced by group policy application to the machine, AKA "machine-based enforcement."  The setting "Interactive logon: Require smart card" is exposed in group policy under:

Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options

 

Make it so!

Ok, we applied the policy.  First up is management: once this policy is applied to a workstation, no one will be able to log on to it with anything but a smart card.  Ever.  Until you remove the policy setting.  It makes for a high-touch scenario where you are constantly changing the application of this setting by moving workstations in the directory or filtering the GPO on them somehow.  With the constant flux of new users with no smart card, existing users forgetting their smart card at home, or worse yet losing it, this can make for a cumbersome job for IT pros to manage.  Tracking which machines should be enforced, which shouldn't, and when they should be reverted one way or the other can turn into quite the affair.  Also consider users moving around the building or campus: if I lost my smart card, the dog ate it, or whatever the case may be, I can't just log on to any workstation without a call to the help desk or IT shop.  Something else to add to what I'm sure is a long list of daily activities.  : (

How are we doing?

Then there's reporting: how do we report to information assurance managers how many users are actually SCL enforced?  The simple answer is, you can't - because you're enforcing machines, not users.  Sure, you can scrape DC security logs (oh boy, let's hope you have some type of audit collection service) and get an idea of how many smart card logons are happening, but this too can become inconsistent with so many variables and scenarios to account for.  What you potentially end up reporting is the number of machines that are enforced, the number that aren't, how many smart card logons are happening across all DCs in the domain, and how many username/password logons are happening.  It would be quite interesting trying to get an accurate count of how many people "are required" to use a smart card for interactive logon from all of this data.  Not so much fun - I'd much rather be playing Halo 5.

That pesky password again!

Let's not forget about password management either.  If you aren't enforcing SCL via the user account control attribute on the user object in AD, then passwords can still be used.  As long as users know their current password, they can continue to change it.  They can also continue to use it on machines that do not have the require-smart-card policy enforced.  In fact, they can even use the password on machines where a smart card is required for interactive logon (type 2) when authenticating to network resources, which are "logon type 3."  So what you can potentially end up with is a hybrid mesh of users and computers in various states: some users may have their password reset because they forgot their smart card, and they can then continue to change and use that password even after their machine is "re-enforced" to require smart card logon.  They can still use the password on machines that aren't SCL enforced (if they find one), and they can still use it for network (type 3) logons.

Thoughts?

For these reasons, this is my least favorite of the two ways to accomplish SCL enforcement.  It's not very flexible, agile, or hands-off from an administrative perspective.  I'm not saying enforcing SCL this way doesn't have its place - for instance, a medium or high assurance kiosk-type workstation such as a local registration authority machine, or another sensitive kiosk type.  But generally speaking, for most cases I tend to recommend against this method.

 

COA 2 - Enforce SCL on the AD User Account Object

Alternatively, smart card logon can be enforced on a per-user basis by setting the "Smart Card Required for Interactive Logon" (AKA SCRIL) flag in the user account control attribute on the AD user object.  The user account control attribute is a single attribute on the user account object that is composed of bitmask flags; its value can be a combination of one or more of 24 possible flag values.

The flag we're interested in is ADS_UF_SMARTCARD_REQUIRED, which has a value of 262144 (0x40000).  We can programmatically test for this flag (in a number of ways) to very quickly determine whether a user is SCL enforced and, based on this information, perform some action.  The flag corresponds to the "Smart card is required for interactive logon" checkbox on the Account tab of the user object in the UI.

It's important to understand what happens when you toggle this flag on with a bitwise operation.  It sets the password to a high entropy 120 character random value, so that the user does not know the password.  This password hash can be scrambled at will, by - you guessed it flipping the flag off and then flipping it back on, this is commonly referred to as "Rotating the password hash."  As you may or may not know password hashes for SCL enforced accounts should be rotated at an interval since the hashes never technically expire and can be stolen and used for logons other than type 2 (interactive), of which logon type 3 (network) is probably the scariest and likely one of (if not the) the most widely used logon types by an attacker when they're traversing network endpoints.  There is also something here that some folks are not aware of.  After you've toggled the flag on, SCL enforced the account and thereby scrambled the password - an administrator can in fact reset the users password for them and they can use it for network logons (Type 3)!  This is where the terms "SCL Enablement," and SCL Enforcement" come into play.  There are really 3 configurations a user can potentially be in when using smart cards:
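To make the bit test concrete, here is a minimal sketch using the ActiveDirectory PowerShell module (the account name jdoe is a placeholder; adjust for your environment):

```powershell
# ADS_UF_SMARTCARD_REQUIRED (SCRIL) = 0x40000 = 262144
$SCRIL = 262144

# Pull the raw userAccountControl value for a user ('jdoe' is a placeholder)
$u = Get-ADUser -Identity jdoe -Properties userAccountControl

# A bitwise AND against the flag tells us whether the account is SCL enforced
[bool]($u.userAccountControl -band $SCRIL)
```

If this returns True, the SCRIL flag is set and the account is SCL enforced.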

  1. SCL Enabled (here we are only talking about implicit UPN mapping)
    1. Subject Alternative Name - the principal name on the smart card certificate is implicitly mapped to the AD user object's UPN attribute
  2. SCL Enforced
    1. Subject Alternative Name - the principal name on the smart card certificate is implicitly mapped to the AD user object's UPN attribute
    2. SCRIL flag ON
  3. SCL Enforced and has/knows password
    1. Subject Alternative Name - the principal name on the smart card certificate is implicitly mapped to the AD user object's UPN attribute
    2. SCRIL flag ON
    3. Admin resets the password; the user knows and can maintain the password (it can be used for type 3 logons)

The ideal place to be from a security perspective is number 2, with the hash being scrambled at an interval.  Number 3 could happen (I've seen it) in an environment where there is a non-smart-card-aware application or service that a user must be able to log on to remotely (type 3) AND the user must be SCL enforced.  This scenario doesn't happen often, but it can.  Number 1 is exactly that: you are enabled for smart card logon, and you can use it or not.  Scenarios 1 and 3 are considered less desirable because the user has/knows a password, and therefore a hash, that can potentially fall victim to credential theft.

Make it so, but faster, stronger, leaner!

So we can toggle this flag to SCL enforce users - awesome.  There are a bunch of ways to do this in both VBScript and PowerShell.  Hopefully everyone is using PowerShell these days, since it's the administrative tool of choice, so we'll go with that.  We can use some code to perform pretty basic stuff, like the following:

  1. Query the directory for users who are NOT SCRIL enforced
  2. Enforce them

But we likely want to do something like the following:

  1. Query the directory for users
  2. Check if they are SCRIL or Not
    1. If they are NOT SCRIL
      1. Check if they are an exception and if so Skip
      2. If they are not an exception toggle the SCRIL flag on
    2. If they are already SCRIL
      1. Toggle SCRIL flag off, then toggle back ON to scramble hash

You'll probably want to key off some attribute on the user object to tell whether a user is an exception to smart card logon enforcement.  Membership in an exception group seems to be the most widely used approach and the easiest to implement.  Identify your exception groups, write your PowerShell code, and schedule it to run at an interval.  I like the idea of running it nightly for a few reasons.  First, the password hashes on already-enforced accounts will be scrambled nightly, giving those hashes a glorious 24-hour life.  Second, it makes un-enforcement easy.  Consider the following scenario:

  1. User forgets smart card at home.
  2. User calls help desk, account gets un-enforced and password gets reset.
  3. User works all day, then goes home.
  4. That night the scheduled enforcement task runs and because the user is not a member of an exception group they are again enforced.

Hopefully they remember their smart card the next day; otherwise, rinse and repeat.  Users should only be placed in SCL exception security groups for long-term exceptions.  You are not managing/filtering GPOs, moving machines, or hunting for an un-enforced machine to log on to - you simply un-enforce the account and reset the password for the user, and the nightly job takes care of the rest.  The user also has the flexibility to roam the building/campus and log on at any workstation.  You can also choose whether or not to rotate the password hashes of already-enforced users during the nightly job.
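As a sketch of what scheduling the nightly run might look like (the script path, task name, and service account names below are illustrative, not prescriptive):

```powershell
# Run the enforcement script nightly under a least-privileged service account
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Enforce-SCL.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 2:00AM
Register-ScheduledTask -TaskName 'Nightly SCL Enforcement' -Action $action -Trigger $trigger `
    -User 'PROSEWARE\svc-scl' -Password 'REDACTED'
```

The account passed to -User is the identity that needs the delegated directory permissions and the "log on as a batch job" user rights assignment.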

Oh, and don't forget to use a least-privileged service account for the identity that the script runs under.  In the directory, all you need is an account that has Read Account Restrictions and Write Account Restrictions delegated to it against the user objects it needs to process.  ; )  On the server you run it from, the identity needs the "log on as a batch job" user rights assignment.

Here is a small snip of PowerShell that goes over this basic concept.  You'll want to add all of the standard stuff like logging, error handling, modify your queries, modify/add your exception groups, etc.  You get the idea!

 

Import-Module ActiveDirectory

# Query AD for user accounts using the following LDAP filter (all user accounts in the Department Users OU)
$OUQuery1 = Get-ADObject -LDAPFilter "(&(ObjectCategory=Person)(ObjectClass=User))" -Properties distinguishedName,userAccountControl -Server localhost:389 -SearchBase "ou=department users,dc=proseware,dc=com" -SearchScope Subtree

function Enforce-Users($Users)
{
    # Test the $Users variable to ensure it isn't null
    if ($Users)
    {
        # Iterate through the $Users variable (Get-ADObject query return set)
        foreach ($user in $Users)
        {
            $exception = $false
            # Test for an SCL exception group; only searches one level, not recursively -
            # if you want a recursive dig, add your code here
            $userGroupMemberships = Get-ADPrincipalGroupMembership -Server localhost:389 -Identity $user.distinguishedName
            foreach ($groupMembership in $userGroupMemberships)
            {
                if ($groupMembership -like "*PROAZURE_CLO_Exceptions*")
                {
                    Write-Host "Not Enforced: " $user.name
                    $exception = $true
                }
            }
            if (-not $exception)
            {
                # Not an exception - enforce smart card required for interactive logon.
                # Check the account to see if it is currently enforced.
                if ($user.userAccountControl -band 262144)
                {
                    # Account is currently enforced; un-enforce and re-enforce to scramble the password
                    $user.userAccountControl = $user.userAccountControl -bxor 262144
                    Set-ADObject -Instance $user
                    $user.userAccountControl = $user.userAccountControl -bor 262144
                    Set-ADObject -Instance $user
                    Write-Host "Enforced: " $user.name
                }
                else
                {
                    # Account is NOT currently enforced; enforce it
                    $user.userAccountControl = $user.userAccountControl -bor 262144
                    Set-ADObject -Instance $user
                    Write-Host "Enforced: " $user.name
                }
            }
        }
    }
}

Enforce-Users -Users $OUQuery1

How are we doing now?

How many users are SCL enforced?  Let's see - it's right at your fingertips with some LDAP filters; no admin account needed.  Scope your queries to your specific OUs and you'll have the information quickly and easily.  Return any attributes you like, export to CSV, dump to a text file, etc.  Tons of options here.  Maybe even create a graphical report using Power BI!?

To return all user account objects that are SCL enforced:

dsquery * -filter "(&(ObjectCategory=Person)(ObjectClass=User)(userAccountControl:1.2.840.113556.1.4.803:=262144))" name userPrincipalName distinguishedName

To return all user account objects that are NOT SCL enforced:

dsquery * -filter "(&(ObjectCategory=Person)(ObjectClass=User)(!userAccountControl:1.2.840.113556.1.4.803:=262144))" name userPrincipalName distinguishedName
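If you prefer PowerShell over dsquery, the same bitwise LDAP matching-rule filters work with the ActiveDirectory module; a quick sketch:

```powershell
# All SCL enforced users (matching rule 1.2.840.113556.1.4.803 = bitwise AND)
Get-ADUser -LDAPFilter "(userAccountControl:1.2.840.113556.1.4.803:=262144)" |
    Select-Object Name, UserPrincipalName, DistinguishedName

# All users NOT SCL enforced
Get-ADUser -LDAPFilter "(!(userAccountControl:1.2.840.113556.1.4.803:=262144))" |
    Select-Object Name, UserPrincipalName, DistinguishedName
```

Add -SearchBase to scope either query to a specific OU, and pipe to Export-Csv for reporting.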

As you can see, reporting is fairly easy and super flexible - you can query the directory with dsquery, VBScript, PowerShell, the MMC snap-in, etc.!

That pesky password? Gone!

It's a constant game of cat and mouse attempting to eliminate the use of passwords altogether.  But hey, if we are smart card enforcing users and rotating their password hashes at a (hopefully short) interval, then we are in good shape!  Audit your enforcement exception security groups regularly - the fewer users that have/know their password, the better.

Thoughts?

Utilizing a PowerShell script to perform the workload of processing user accounts for enforcement allows for completely automated execution of the task, granular targeting by organizational unit, and expected consistency of results.  For ease of administration, reporting, consistency, and greater flexibility the user account control (UAC) attribute on AD user account objects is the recommended way to smart card enforce users (in most cases).

Happy Friday!

Jesse

Updated Win32_SystemEnclosure Chassis Types


The Win32_SystemEnclosure WMI class has been rather valuable over the years for properly detecting and targeting specific software, drivers, etc. to a particular type of system.  The problem is that while it has been a staple over the years, it has had many revisions.  These revisions are part of the DMTF specification in the SMBIOS, and within recent years we have seen changes that required new system chassis types to identify these hybrid devices.

If you look at the class on MSDN, you'll see that the ChassisTypes property has not been updated to reflect newer options since the original DMTF specification 2.3.0, more than a decade ago.  Since that time, 12 new chassis types have been introduced.  Technologies like MDT that still leverage this class will need to be updated with the list below if they are being utilized in your environment, so that your deployments continue to work as this new breed of devices starts to transition into your fleet or supported models.  This applies to any scripts or Group Policies that require this level of targeting as well.  Hopefully this list helps someone save a few minutes of their day.

As a side note, there are other ways to test for these conditions, such as checking Win32_Battery or using Win32_ComputerSystem and the PCSystemType property, but all have their own downsides.  I am curious to hear what has been most reliable for you all!

Below is the current list as of version 3.1.1 posted 1/13/2017.

 

Other (1)
Unknown (2)
Desktop (3)
Low Profile Desktop (4)
Pizza Box (5)
Mini Tower (6)
Tower (7)
Portable (8)
Laptop (9)
Notebook (10)
Hand Held (11)
Docking Station (12)
All in One (13)
Sub Notebook (14)
Space-Saving (15)
Lunch Box (16)
Main System Chassis (17)
Expansion Chassis (18)
SubChassis (19)
Bus Expansion Chassis (20)
Peripheral Chassis (21)
RAID Chassis (22)
Rack Mount Chassis (23)
Sealed-case PC (24)
Multi-system chassis (25)
Compact PCI (26)
Advanced TCA (27)
Blade (28)
Blade Enclosure (29)
Tablet (30)
Convertible (31)
Detachable (32)
IoT Gateway (33)
Embedded PC (34)
Mini PC (35)
Stick PC (36)

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

PowerShell – Get-Credential and certificates


Matthew Bongiovi had a discussion on Get-Credential and how it works - so useful that I thought I'd paste it here so that you and I can refer to it in the future! See below:

 

The Get-Credential cmdlet generates the prompt using the CredUIPromptForCredentials function. The documentation for that function says:

“In the case of credentials other than UserName/Password, a marshaled format of the credential can be passed in. This string is created by calling CredMarshalCredential.”

For me, this is actually exactly what I want. However, for someone else looking to then decode that UserName string in the PSCredential, they can reverse the marshalling of that string into its struct, which is the CERT_CREDENTIAL_INFO struct. From that, they could collect the SHA-1 hash of the certificate.

Thanks,

Matt


Recently Published KB articles and Support Content 9-15-2017


We have recently published or updated the following support content for Configuration Manager.

How-To or Troubleshooting

10082 Troubleshooting PXE boot issues in Configuration Manager

  • Online Troubleshooting Guide that helps administrators diagnose and resolve PXE boot failures in System Center 2012 Configuration Manager (ConfigMgr 2012 or ConfigMgr 2012 R2) and later versions. Read More https://support.microsoft.com/help/10082.

4040243 How to enable TLS 1.2 for Configuration Manager

  • This article describes how to enable TLS 1.2 for Microsoft System Center Configuration Manager. This description includes individual components, update requirements for commonly used Configuration Manager features, and high-level troubleshooting information for common problems. Read More https://support.microsoft.com/help/4040243/.

Issue Resolution

4037828 Summary of changes in System Center Configuration Manager current branch, version 1706

  • Release version 1706 of System Center Configuration Manager Current Branch contains many changes to help you avoid issues and many feature improvements. The "Issues that are fixed" list is not inclusive of all changes. Instead, it highlights the changes that the product development team believes are the most relevant to the broad customer base for Configuration Manager. Read More https://support.microsoft.com/help/4037828.

4036267 Update 2 for System Center Configuration Manager version 1706, first wave

  • An update is available to administrators who opted in through a PowerShell script to the first wave (early update ring) deployment for System Center Configuration Manager current branch, version 1706. You can access the update in the Updates and Servicing node of the Configuration Manager console. This update addresses important late-breaking issues that were resolved after version 1706 became available globally. Read more https://support.microsoft.com/help/4036267.

4039380 Update for System Center Configuration Manager version 1706, first wave

  • This update addresses important issues in the first wave (early update ring) deployment for Microsoft System Center Configuration Manager current branch, version 1706. This update is no longer available and has been replaced by update KB 4036267. Read more https://support.microsoft.com/help/4039380.

4041012 1702 clients do not get software updates from Configuration Manager

  • After installing Configuration Manager version 1702, newly installed clients are unable to get updates from the Software Update Point. This can also occur if the Software Update Point is moved to a different server after installation of version 1702.  Read More https://support.microsoft.com/help/4041012.

4019125 FIX: System Center Configuration Manager replication process by using BCP APIs fails when there is a large value in an XML column. Read More https://support.microsoft.com/help/4019125.

4038659 Existing computer records are not updated when new information is imported in System Center Configuration Manager version 1702

  • When new information for an existing computer is imported, either through the Configuration Manager console or the ImportMachineEntry method, a new record is created for that computer. This causes changes to the existing collection membership, discovery properties, and task sequence variables for that computer. Read More https://support.microsoft.com/help/4038659.

Cookie Persistence in SharePoint Online


Introduction

Certain legacy features in SharePoint Online — Explorer view, for example — leverage legacy technologies like Windows WebDAV.  WebDAV makes use of the browser's authentication cookie.  Because of security concerns, WebDAV cannot access session cookies; only cookies that are written to disk are accessible by WebDAV. This means that in order for WebDAV to access the authentication cookie, the cookie needs to be persistent (persistent cookies are written to disk)*.

The easiest way to ensure cookie persistence is to check the "Keep Me Signed In" box on the Office 365 Home Realm Discovery Page before entering your username and password. However, some customers who have Auto-Acceleration enabled in their tenancies will not be presented with a home realm discovery page. Other administrators may wish to issue persistent cookies across the organization, or a subset of the organization, and not have to bother with user education or extra steps in order to streamline user workloads and ensure capabilities. Typically, these Administrators have federated identity providers (such as ADFS.)  If you have ADFS using WS-Fed Federation with Office 365, you can direct Azure Active Directory to issue a persistent cookie by including certain claims rules in your Relying Party Trust**.

Establishing the Claim

Previously, if your ADFS Properties were set to allow the Keep Me Signed In Enabled and Persistent SSO Enabled attributes (Set-Adfsproperties -KmsiEnabled and -PersistentSSOEnabled, respectively) then the Persistent SSO claim was present in the claims pipeline, and administrators could pass that claim through to the service. However, a fix was recently (August 2017) released that only issues the PSSO claim on the pipeline if the KMSI option is checked at the identity provider's Forms Based Authentication page. Therefore, customers may have noticed that they are no longer being issued persistent cookies in SharePoint Online.  To work around this fix, ADFS Administrators can add or edit their issuance claims rules for the Microsoft Office 365 Identity Platform relying party trust to include the PSSO claim as follows:

  • If an Administrator wants to issue a persistent single sign on token for all users, he can simply issue the claim with the following language:
=> issue(Type = "http://schemas.microsoft.com/2014/03/psso", Value = "true");
  • If an Administrator wants to issue the claim only for users that authenticate from inside the corporate network, they would use two claims. First would be to pass through the InsideCorporateNetwork claim, and then they would issue a PSSO claim based on the value of the InsideCorporateNetwork claim, as follows:
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]

=> issue(claim = c);

 

c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", Value == "true"]

=> issue(Type = "http://schemas.microsoft.com/2014/03/psso", Value = "true");
  • If an Administrator wants to only issue claims to a subset of users, such as based on group membership, they must find the SID of the group they wish to use to issue the claim, and then use the following claims rule language:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "[SID OF GROUP]", Issuer == "AD AUTHORITY"]

=> issue(Type = "http://schemas.microsoft.com/2014/03/psso", Value = "True", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, ValueType = c.ValueType);

Editing Claims in ADFS

Your organization's administrator for Active Directory Federation Services will know how to access the Office 365 Relying Party Trust; however, for completeness, here is how one would implement the claims rules above.

In ADFS, open the ADFS Management Console (In Server Manager > Tools > ADFS Management)

ADFS Management Screen Shot


In the left-hand navigation pane of the ADFS Management Console, select ADFS > Trust Relationships > Relying Party Trusts. There you will see the trusts that have been configured. By default, the Office 365 relying party trust display name is "Microsoft Office 365 Identity Platform" and the identifier is "urn:federation:MicrosoftOnline"

Relying Party Trusts Screenshot

Right-click the Microsoft Office 365 Identity Platform trust and select Edit Claim Rules. The rules above are Issuance Transform Rules. You can add them by clicking Add Rule and choosing the "Send Claims Using a Custom Rule" option:

Custom Claim Rule Dialog Screenshot

Here is an example of a simple issue of the PSSO Claim:

Custom Claims Rules Language example

Click Finish, and the Claims Rule will be added to the pipeline. Please note that order matters; if you decide to issue the PSSO claim based on another claim (such as the InsideCorporateNetwork Claim, above), that claims rule must be present before the PSSO claim.

 

Notes on Persistent Cookies

*: Administrators should use caution when deciding if and how to issue these cookies. Currently, there is no check whether a specific setting has changed; as long as the user has a valid FedAuth cookie, they are allowed access to SharePoint resources and do not have to re-authenticate. For example, a user who is issued a persistent cookie because their machine is inside the corporate network (example above) and then takes their laptop down the street to the corner coffee shop will still be able to access SharePoint resources.

It is up to the administrator to understand and manage the kinds of risks using these cookies can incur.

**: As of right now, AAD does not support SAML-based use of the Persistent Single Sign On claim / SAML attribute.

[Free Download] Boost Your Team's Mobile Productivity with Office 365 [Updated 9/16]


 

With Office 365, mobile workers can access data and collaborate regardless of location. Check out what it can do for small and midsize businesses:

  • Support mobile work, reduce stress, and improve productivity
  • Store data in the cloud and remove the burden of securing it yourself
  • Sync information across devices so your team can respond quickly

 

Download the free e-book here

 

Running Universal Dashboard in a Docker Instance


Universal Dashboard

Adam Driscoll, the creator of PowerShell Pro Tools for Visual Studio, has created a new feature: Universal Dashboard.
The PowerShell Pro Tools Universal Dashboard PowerShell module allows for the creation of web-based dashboards. The client- and server-side code for the dashboard is authored entirely in PowerShell. Charts, monitors, tables, and grids can easily be created with the cmdlets included with the module. The module is cross-platform and will run anywhere PowerShell Core can run.

With this PowerShell module you can easily create awesome dashboards. After building a few of them, I wanted to test whether you could also run the Universal Dashboard module in a Docker instance.

Docker

I have a Windows 10 machine, so I used the Docker for Windows client to create Docker containers on it.

Docker is a software technology providing containers, promoted by the company Docker, Inc. Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. [from Wikipedia] Please check the references if you want to learn more about Docker.

Example Universal Dashboard

With the script below we can create a dashboard that retrieves Microsoft stock prices by calling a web service.

<#
    Example Dashboard for showing Microsoft Stock Value last 6 months.
    Links:
    - Universal Dashboard: https://adamdriscoll.gitbooks.io/powershell-tools-documentation/content/powershell-pro-tools-documentation/universal-dashboard.html
    - Stock API: https://iextrading.com/developer/
#>
$Dashboard = Start-Dashboard -Content {
    New-Dashboard -Title "Stockprice Dashboard" -Color '#FF050F7F' -Content {
        #Insert HTML Code
        New-Html -Markup '<h1>Running Universal Dashboard in Container!</h1>'
        New-Row {
            New-Column -Size 12 -Content {
                New-Chart -Type Line -Title "Stock Values - 6 months" -Endpoint {
                    (Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get) |
                        Out-ChartData -LabelProperty "date" -DataProperty "close"
                }
            }
        }
        New-Row {
            New-Column -Size 12 {
                New-Grid -Title "StockPrice MSFT" `
                    -Headers @('Date', 'Close Stock Value', 'Low Stock Value', 'High Stock Value') -Properties @('date', 'close', 'low', 'high') `
                    -DefaultSortColumn 'Date' -DefaultSortDescending  `
                    -Endpoint {
                    $StockData = Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get
                    $StockData | Out-GridData
                }
            }
        }

    }
} -Port 9292

#Open Dashboard
Start-Process http://localhost:9292


#Stop Dashboard
#Stop-Dashboard -Server $Dashboard

If we run the above script after installing the UniversalDashboard PowerShell module and setting the Universal Dashboard license, we get the following dashboard.

Running Universal Dashboard in Docker Instance

We are going to use the microsoft/powershell image to run our Universal Dashboard. This Docker image contains PowerShell Core.

Make sure you have installed Docker for Windows and have shared your C: drive in the Docker settings. We will store the Universal Dashboard script on the C: drive.

High-level steps:

  1. Install the Docker for Windows Client
  2. Configure the Shared Drives (Map C-drive)
  3. Start Docker Windows Client
  4. Create Dashboard Script and store on your Windows C-drive.
  5. Download and install microsoft/powershell Docker image
  6. Start Docker microsoft/powershell instance
  7. Start Dashboard PowerShell script on Docker microsoft/powershell running instance.

I assume you were able to complete steps 1 to 3 yourself, so I'll continue with what your dashboard script needs to look like.

<#
    Example Dashboard for showing Microsoft Stock Value last 6 months.
    Links:
    - Universal Dashboard: https://adamdriscoll.gitbooks.io/powershell-tools-documentation/content/powershell-pro-tools-documentation/universal-dashboard.html
    - Stock API: https://iextrading.com/developer/
#>

#region install Universal Dashboard Module from PSGallery
Install-Module UniversalDashboard -Scope AllUsers -Force
#endregion

#region Set License
$License = Get-Content -Path '/data/license.txt'
Set-UDLicense -License $License
#endregion

#region Universal Dashboard
Start-Dashboard -Content {
    New-Dashboard -Title "Stockprice Dashboard" -Color '#FF050F7F' -Content {
        #Insert HTML Code
        New-Html -Markup '<h1>Running Universal Dashboard on Docker Instance!</h1>'
        New-Row {
            New-Column -Size 12 -Content {
                New-Chart -Type Line -Title "Stock Values - 6 months" -Endpoint {
                    (Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get) |
                        Out-ChartData -LabelProperty "date" -DataProperty "close"
                }
            }
        }
        New-Row {
            New-Column -Size 12 {
                New-Grid -Title "StockPrice MSFT" `
                    -Headers @('Date', 'Close Stock Value', 'Low Stock Value', 'High Stock Value') -Properties @('date', 'close', 'low', 'high') `
                    -DefaultSortColumn 'Date' -DefaultSortDescending  `
                    -Endpoint {
                    $StockData = Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/chart/6m' -Method Get
                    $StockData | Out-GridData
                }
            }
        }

    }
} -Port 9090 -Wait
#endregion

This script downloads and installs the UniversalDashboard module from the PSGallery, configures the license, and starts the dashboard listening on port 9090.

Save the dockeruniversaldashboard.ps1 file on your c-drive.

Now it's time to start Docker and download the latest microsoft/powershell Docker image from the Docker Hub.

Open your PowerShell Command prompt (as Administrator) and search for the microsoft/powershell image.

If you have not downloaded the microsoft/powershell image run

docker pull microsoft/powershell

in the PowerShell Console.

With the command

docker images

you can see your installed Docker images.

Now we can run the following docker commands to start our Universal Dashboard hosted in the Docker microsoft/powershell instance.

docker run -d -p 9090:9090 --rm --name ud -i -v c:/Temp:/data microsoft/powershell
docker exec -i ud  powershell -file ./data/dockeruniversaldashboard.ps1

The above docker commands start the Docker container with port 9090 mapped and the c:\temp folder mounted as /data, where we saved the dockeruniversaldashboard.ps1 file we created earlier.

The second command starts the PowerShell script in the Docker instance.

Here is an animated GIF showing the end result.

Go buy your PowerShell Pro License and start creating cool Dashboards.

References:

Ignite 2017: Matt’s list of recommended sessions


Introduction: Microsoft Ignite 2017 is right around the corner, September 25-29 in Orlando, Florida. While there are over 1,536 sessions, I wanted to share with you the list of sessions that I will either be attending in person or watching on demand later when I get home. Please feel free to use this list to help create your personal schedule, or your on-demand viewing list later. Also, be sure to follow me on Twitter @SosemanMatt for updates while at Ignite. Enjoy!

Matt's Tip: Every year I spend ~200 hours watching Ignite sessions while running on the treadmill every evening or on an early Saturday morning to ensure I stay up to speed and keep my skills sharp. These sessions are addictive, and fun! They inspire me to go out and learn more, lab up a scenario, and give me great stories to share with my peers, customers and partners.

Note: My list will be targeted at Microsoft 365 sessions, although I will have some other topics sprinkled about. Click each session to be taken directly to that session's page on the Microsoft Ignite website.

Link to the Microsoft Ignite website: https://www.microsoft.com/en-us/ignite/default.aspx

Must See:

(If you don't have time to watch anything else, watch these sessions.)

Office 365:

Microsoft Planner:

Microsoft Teams:

Bots:

SharePoint/OneDrive:

Security/Compliance:

Device Management:

Skype for Business:

Microsoft Stream:

StaffHub:

Surface:

Windows:

Edge:

Top Contributors Awards! September’2017 Week 2


Welcome back for another analysis of contributions to TechNet Wiki over the last week.

First up, the weekly leader board snapshot...

 

As always, here are the results of another weekly crawl over the updated articles feed.

 

Ninja Award Most Revisions Award
Who has made the most individual revisions

 

#1 Kapil.Kumawat with 137 revisions.

 

#2 M.Vignesh with 111 revisions.

 

#3 Arleta Wanat with 105 revisions.

 

Just behind the winners but also worth a mention are:

 

#4 Peter Geelen with 47 revisions.

 

#5 Ken Cenerelli with 40 revisions.

 

#6 Nourdine MHOUMADI with 33 revisions.

 

#7 Sabah Shariq with 20 revisions.

 

#8 Maruthachalam with 18 revisions.

 

#9 RajeeshMenoth with 14 revisions.

 

#10 Nonki Takahashi with 9 revisions.

 

 

Ninja Award Most Articles Updated Award
Who has updated the most articles

 

#1 Kapil.Kumawat with 61 articles.

 

#2 M.Vignesh with 55 articles.

 

#3 Peter Geelen with 11 articles.

 

Just behind the winners but also worth a mention are:

 

#4 Arleta Wanat with 11 articles.

 

#5 Nourdine MHOUMADI with 10 articles.

 

#6 Ken Cenerelli with 9 articles.

 

#7 Sabah Shariq with 7 articles.

 

#8 RajeeshMenoth with 3 articles.

 

#9 Nonki Takahashi with 2 articles.

 

#10 Rauf Khalafov with 1 article.

 

 

Ninja Award Most Updated Article Award
Largest amount of updated content in a single article

 

The article to have the most change this week was Integração DWD (Dns, Wins e Dhcp) - Dicas poderosas., by FÁBIOFOL

This week's reviser was Kapil.Kumawat

 

 

Ninja Award Longest Article Award
Biggest article updated this week

 

This week's largest document to get some attention is PowerShell: How to Create and Use Classes , by Brian Nadjiwon

This week's revisers were Kapil.Kumawat & Peter Geelen

 

 

Ninja Award Most Revised Article Award
Article with the most revisions in a week

 

This week's most fiddled with article is How to design your data structure in Azure Cosmos DB, by HansamaliGamage. It was revised 18 times last week.

This week's revisers were M.Vignesh & HansamaliGamage

 

 

Ninja Award Most Popular Article Award
Collaboration is the name of the game!

 

The article to be updated by the most people this week is SQL: Handling comma while creating CSV, by Atif-ullah Sheikh

This week's revisers were M.Vignesh, Kapil.Kumawat, Ken Cenerelli, Richard Mueller, Peter Geelen & Atif-ullah Sheikh

 

Ninja Award Ninja Edit Award
A ninja needs lightning fast reactions!

 

Below is a list of this week's fastest ninja edits. That's an edit to an article shortly after another person edited it.

 

Ninja Award Winner Summary
Let's celebrate our winners!

 

Below are a few statistics on this week's award winners.

Most Revisions Award Winner
The reviser is the winner of this category.

Kapil.Kumawat

Kapil.Kumawat has won 3 previous Top Contributor Awards:

Kapil.Kumawat has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Kapil.Kumawat's profile page

Most Articles Award Winner
The reviser is the winner of this category.

Kapil.Kumawat

Kapil.Kumawat is mentioned above.

Most Updated Article Award Winner
The author is the winner, as it is their article that has had the changes.

FÁBIOFOL

FABIOFOL has won 3 previous Top Contributor Awards:

FABIOFOL has not yet had any interviews, featured articles or TechNet Guru medals (see below)

FABIOFOL's profile page

Longest Article Award Winner
The author is the winner, as it is their article that is so long!

Brian Nadjiwon

This is the first Top Contributors award for Brian Nadjiwon on TechNet Wiki! Congratulations Brian Nadjiwon!

Brian Nadjiwon has TechNet Guru medals, for the following articles:

Brian Nadjiwon has not yet had any interviews or featured articles (see below)

Brian Nadjiwon's profile page

Most Revised Article Winner
The author is the winner, as it is their article that has been changed the most

HansamaliGamage

Hansamali has won 11 previous Top Contributor Awards. Most recent five shown below:

Hansamali has TechNet Guru medals, for the following articles:

Hansamali has not yet had any interviews or featured articles (see below)

Hansamali's profile page

Most Popular Article Winner
The author is the winner, as it is their article that has had the most attention.

Atif-ullah Sheikh

This is the first Top Contributors award for Atif-ullah Sheikh on TechNet Wiki! Congratulations Atif-ullah Sheikh!

Atif-ullah Sheikh has not yet had any interviews, featured articles or TechNet Guru medals (see below)

Atif-ullah Sheikh's profile page

Ninja Edit Award Winner
The author is the reviser, for it is their hand that is quickest!

M.Vignesh

M.Vignesh has won 19 previous Top Contributor Awards. Most recent five shown below:

M.Vignesh has not yet had any interviews, featured articles or TechNet Guru medals (see below)

M.Vignesh's profile page

 

Another great week from all in our community! Thank you all for so much great literature for us to read this week!
Please keep reading and contributing!

 

Best regards,
— Ninja [Kamlesh Kumar]

 

Why Building Strong Partnerships Will Help You Succeed [Updated 9/17]


(This article is a translation of Why Building Strong Partnerships Will Help You Succeed, posted on the Microsoft Partner Network blog on July 31, 2017. Please refer to the linked page for the latest information.)

I have been in this industry for nearly 25 years and have learned a great deal, but recently I have come to realize that yesterday's competitors are today's partners. In the past, companies that became Microsoft partners were encouraged and expected to cover as many Microsoft products as possible, and aimed to earn as many badges as they could. But the days when that was a sustainable business model are over. Today, you have to judge for yourself how many Microsoft products you can realistically cover and deliver on.

A Changing Business Environment

The shift of business to the cloud has been discussed extensively over the past several years. Our company, which has built a partner relationship with Microsoft close enough to be called an "insider," followed the advice on business transformation. Moving from project-based services to managed services, and from on-premises solutions to cloud solutions, was extremely important. But honestly, even we are amazed at the pace of change in today's technology industry.

As cloud solutions have become widespread and Microsoft and other vendors can now ship technology updates almost every month, the partner community has to obtain the latest information immediately to keep from falling behind. Not only do engineers and consultants need to keep up with these changes and complete technology training and certifications; intellectual property (IP) must also be maintained and updated regularly. For example, when technology changes, field survey checklists, configuration checklists, document templates, and so on all need to be updated. When technology changes every month (or even more frequently), you have to spend business hours every week updating materials. If that work is left unfinished, you can badly misjudge the impact on your own business.

This also affects sales and marketing teams, because they create all the materials used in sales activities. These teams need to stay on top of technology changes and review company brochures, proposal templates, purchase agreements, and the like to make sure they reflect changes in how the technology works and is deployed. Moreover, technology change broadly affects the entire company, not just the engineering team or the departments with the greatest impact on competitiveness.

Forming Strong P2P Relationships

So what should Microsoft partners do? The answer we arrived at was to narrow the products we cover and collaborate with other partners.

Recently I had the opportunity to meet with several leading Microsoft partners in our area and exchange views on this difficult problem. Everyone I spoke with had arrived at the same solution we had, which reassured me that our thinking was on the right track. That solution is to pick the one or two core workloads your company excels at, keep handling them in-house, and outsource the rest.

Today, we have entered into critically important strategic partnerships to outsource the work needed to complement our services. At first, we had to choose partners carefully: these companies would be delivering services under our name, and some of them were former competitors. Before partnering, we needed to confirm that they delivered the same level of quality we do. To that end, we closely observed the first few projects, confirming that communication was thorough and that high quality was maintained throughout each project. Once we established trust and got to know our partners, everything went smoothly.

"Our partnership with ATSG has increased our revenue and improved the productivity of our Windows and System Center engineers. We are now exploring how to leverage this partnership to cross-sell ATSG's managed services and Office 365 solutions. It's a win-win-win for VDX, ATSG, and our shared customers."

– Rob English, President and CEO, VDX

The way to get the most out of these relationships is to refer business to each other and establish a solid, win-win business relationship. In our case, we outsource projects involving technologies outside our specialty. Conversely, partners refer projects involving Office 365 and the other technologies in our area of expertise to us.

Finding Partners

In my experience, trust develops naturally in a very short time. Doing what you promise is a very good sign. For example, being on time for visits and conference calls, and completing your assigned work and requests by the deadline without being chased, are all qualities of a trustworthy partner.

I usually find partners through trusted channels, such as my connections at Microsoft (local subsidiaries and headquarters) and fellow members of the IAMCP. Promising partners are closer than you might think. What customers want is a high-quality solution, and a company that can deliver it as a whole.

In our case, customers raise no particular concerns when we tell them that part of the solution will be delivered by a partner. Today, that way of doing business is mainstream. When I vouch that the partner can execute the project at the same level of quality we provide, customers trust my judgment and agree. To protect our business, we also put an NDA, a master partnership agreement, and a written SOW in place for each project.

Looking Ahead

This is clearly not where I once imagined our company would be in 2017. But it is how we help customers adopt a wide range of technologies and stay competitive in a rapidly changing market. If you manage which partners you entrust work to and put the customer first, you can achieve a "win-win-win."

To learn more about finding the right partners for your business solutions, connect with business experts in the Microsoft Partner Community (in English).

 

 

 

 

 

 


Communication Site is Launching


During the SharePoint Virtual Summit last June, Microsoft announced the launch of a new template for SharePoint Online sites. The plan is for the new template to roll out to First Release users first and then to all customers, so let us take a look at it in this blog.

What is the Communication Site?

It is a site template that you can create from the SharePoint landing page in your Office 365 tenant. It allows you to publish dynamic, beautiful content to your colleagues in your organization to keep them engaged on topics, events, and projects.

You can choose one of 3 designs:

  1. Topic:  Use this design if you have a lot of information to share such as news, events, and other content.
  2. Showcase: Use this design to showcase a product, team, or event using photos or images.
  3. Blank: Start with a blank site and make your design come to life quickly and easily.

I have created a Topic site, and this is how it looks right after creation.

If I scroll down a little bit I see this too, where I can manage news, events, and documents.

 

With this new site template you can do a lot, for example:

  • Consume, create and connect from your mobile device via the SharePoint apps.
  • Communication sites help further refine and enhance your message.
  • Make your home page and sub-pages look great.
  • Dynamically pull in and display data, documents, and information via web part improvements; for example, you can integrate Power BI reports to bring in your interactive reports, and you can also add Office 365 Videos.

 

Here are some links that can help you learn more about this new template:

 

SharePoint communication sites begin rollout to Office 365 customers

Reach your audience via SharePoint communication sites in Office 365

 SharePoint Online: Communication Sites

 

By John Naguib (Twitter,  TechNet Profile,  MVP Profile)

Office 365: Fatal error RecipientNotFoundPermanentException has occurred


In Office 365, administrators may experience almost immediate failures of migrations to Office 365. The error is typically in the following format:

 

Error: Cannot find a recipient that has mailbox GUID 'MAILBOX-GUID'.

 

The migration log shows the following information at the end regarding the failure exception:

 

9/11/2017 9:13:58 PM [DM5PR04MB0988] Fatal error RecipientNotFoundPermanentException has occurred.

 

The error itself is quite misleading at a casual glance. The recipient can be located on-premises as well as in Office 365. If the recipient can easily be located, why is the migration failing with this error?

 

Further review of the migration log shows entries similar to the following:

 

9/11/2017 9:13:54 PM [DM5PR04MB0988] Content from the Shard mailbox (Mailbox Guid: MAILBOX-GUID, Database Guid: DATABASE-GUID) will be merged into the target mailbox.

 

The MAILBOX-GUID listed for the SHARD mailbox matches the MAILBOX-GUID in the error, and it is this SHARD mailbox that cannot be located. What is a SHARD mailbox?

 

In Office 365, on-premises mailboxes are represented by mail user objects. It is possible that you have migrated mailboxes to Office 365 and those users are collaborating with users that have not yet been migrated. The services they are collaborating on may store data within an Exchange Online mailbox. A user that has not been migrated, being a mail user object, typically does not have a mailbox to store data in. In order to facilitate this collaboration we create a special mailbox known as the SHARD mailbox. Administrators can view SHARD mailboxes using the Get-MailboxLocations cmdlet.

 

Get-MailboxLocations -Identity ALIAS

 

RunspaceId                    : d075c233-05d4-4b41-9c2d-3fbc930d593f
Id                            : GUIDGUID
MailboxGuid                   : GUID
DatabaseLocation              : NAMPR06DG007-db108
MailboxLocationType           : ComponentShared
OwnerId                       : NAME
TenantGuid                    : GUID
MailboxMoveBatchName          :
MailboxMoveStatus             : None
MailboxMoveFlags              : None
RawExternalEmailAddress       :
MigrationDryRun               : False
OptInUser                     : False
IsMigratedConsumerMailbox     : False
IsPremiumConsumerMailbox      : False
PrimaryMailboxSource          : None
MailboxProvisioningConstraint :
SiloName                      :
Identity                      : GUIDGUID
IsValid                       : True
ObjectState                   : New

 

When the administrator provisions a migration to Office 365 for a user with a SHARD mailbox, the migration process attempts to locate the mailbox in the defined database. As a post-migration activity, we merge the contents of the SHARD mailbox into the primary mailbox that was migrated. The SHARD mailbox is then decommissioned as it is no longer necessary. The error we are receiving in this case is the result of the migration process being unable to locate the SHARD mailbox.

 

In order to correct this condition administrators should open a support ticket and work with product support services.

Using Frequency Analysis to Defend Office 365


As security threats evolve, so must defense. In Office 365, we have engineering teams dedicated to building intrusion detection systems that protect customer data against new and existing threats. In this blog, we are talking about a security monitoring challenge of cloud services and our recent attempt to solve it.

Let us start with two scenarios. In the first scenario, an adversary sneakily adds or updates a registry key to run his malware automatically. In the second scenario, an adversary sneakily injects a malicious DLL into a long running legitimate process to maintain a persistent backdoor. The traditional signature-based detections work well for known threats. To detect unknown threats, scheduled frequency analysis across machines is a practical approach, since it is unlikely that a malicious registry key or loaded DLL will be present on most of systems in our fleet. However, this approach usually requires collecting more monitoring data. For example, if we are capturing a daily snapshot of loaded DLLs for all running processes, we could easily log 10,000 events per machine per day. What if there are more similar detections? The data volume will stress our logging, uploading and analytics infrastructures.

To meet this challenge, we developed a "prevalence system" that enables us to identify common registry keys and DLL paths across our fleet and reduce the volume of data that must be logged. In the first part of this post, we present an initial prototype. In the following three parts, we discuss our solutions to three critical weaknesses in the initial prototype. Finally, we share some results of our experiments.

Initial Prototype

Our security agent periodically snapshots registry key values and loaded DLLs. Since most entries are the same across our fleet, we would like our agent to only log those that are unique to that machine. We call this measurement "prevalence". The system must maintain a secure, reliable, up-to-date list of prevalent entries. We call this the “prevalence web service”.

As shown in Figure 1, the initial prototype is not complicated and comprises three parts: the prevalence web service, the security monitoring agent, and the map/reduce system. The security agent is responsible for periodically outputting a snapshot of loaded DLLs. It calls the prevalence web service and retrieves a list of prevalent DLL paths. Next, it removes these prevalent DLL paths from the snapshot and only logs the DLL paths that are not in the list. Those logged events are then uploaded to the security event hub. Map/reduce jobs determine which paths are prevalent across the fleet and which paths are uncommon. The newly identified prevalent paths are uploaded to the prevalence web service.


Figure 1: Initial prototype of prevalence system
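As a rough illustration, the agent-side filtering step amounts to a set difference between the snapshot and the prevalence list (a minimal sketch; the function and example paths are hypothetical, not the actual agent code):

```python
# Minimal sketch of the agent-side filtering step: given a snapshot of
# loaded DLL paths and the prevalence list fetched from the web service,
# log only the entries that are not prevalent across the fleet.

def filter_snapshot(snapshot, prevalent_paths):
    """Return the anomalous entries that should be logged."""
    # Windows paths are case-insensitive, so compare in lowercase.
    prevalent = {p.lower() for p in prevalent_paths}
    return [path for path in snapshot if path.lower() not in prevalent]

snapshot = [
    r"C:\Windows\System32\kernel32.dll",
    r"C:\Users\bob\AppData\Local\Temp\evil.dll",
]
prevalence_list = [r"C:\Windows\System32\kernel32.dll"]

# Only the unusual Temp-folder DLL survives the filter and gets logged.
print(filter_snapshot(snapshot, prevalence_list))
```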

The overall design is straightforward, but there are several critical problems to solve to ensure the system is reliable. One critical and common scenario is what to do when the prevalence web service is unavailable. Another challenge with the prototype is how to maintain a relatively small prevalence list. Moreover, reducing detection false positives is also important.

Round-Robin Logging

There are many reasons why the prevalence web service may be unavailable. For example, a network connectivity issue may prevent the service from being reached. Moreover, the prevalence web service might be under upgrade and be unavailable for all clients.

When the prevalence web service is unavailable, the agent could choose to log every entry (treating all of them as anomalous) or log no entries (treating all of them as prevalent). If we simply skip logging all snapshot events, then we lose coverage for that detection. The goal is to find a balance between resources and detection coverage. We implemented a consistent round-robin approach for event logging when prevalence is unavailable.

Suppose we have 1,600 different loaded DLLs across all running processes on machine A. We would like to split these events into 16 buckets, with 100 events in each. Bucket 0 will be logged on day 1; bucket 1 will be logged on day 2; ...; bucket 15 will be logged on day 16. One cycle completes in 16 days. A given entry will be logged exactly once per 16-day cycle.


Figure 2: Round-robin logging when prevalence is unavailable

The event-bucket mapping algorithm needs to be consistent. First, it must map the events uniformly into different buckets. We could use the .NET GetHashCode() method to uniformly hash DLL paths. However, this hash code is not guaranteed to be stable across machines. We need an algorithm that maps the same DLL path to the same bucket on every machine. This ensures that the same DLL will be uploaded to the map/reduce systems on the same day, so that the frequency analysis is reliable. We chose to use a cryptographic hash, since it will return the same value for a given path on every machine.
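The consistent bucket mapping can be sketched with a cryptographic hash (a hypothetical sketch under the assumptions above; the post does not show the actual agent implementation):

```python
import hashlib

NUM_BUCKETS = 16  # one bucket per day of the 16-day cycle

def bucket_for(dll_path: str) -> int:
    """Map a DLL path to a bucket consistently across machines.

    hashlib.sha256 produces the same digest for the same input on every
    machine, unlike .NET GetHashCode(), which is not stable across
    processes or machines.
    """
    digest = hashlib.sha256(dll_path.lower().encode("utf-8")).digest()
    # Fold the first 4 bytes of the digest into a bucket index.
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

# The same path always lands in the same bucket, on any machine:
b = bucket_for(r"C:\Windows\System32\ntdll.dll")
assert b == bucket_for(r"C:\Windows\System32\ntdll.dll")
assert 0 <= b < NUM_BUCKETS
```

On day d of the cycle, the agent would log only the entries whose bucket equals d % 16, so every entry is logged exactly once per cycle.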

Trimming the Prevalence List

Another challenge is the size of the prevalence list. The DLL list will grow as new versions of services are deployed. Storing a growing prevalence list on the prevalence server is not a problem. However, fetching an ever-growing prevalence list will lead to network and memory pressure in the security monitoring agent.

One approach is to record the current timestamp when each prevalence entry is added to the service, and design the web method to only retrieve recent entries. However, these prevalent DLL paths will not be logged once they have been uploaded to the prevalence service. They will not be identified as prevalent entries again by the map/reduce jobs in the security event hub (see Figure 1). So their timestamps will not be updated in the prevalence web service, and they will eventually become stale.

One solution is to log a subset of DLLs even if they are on the prevalence list. As shown in Figure 3, we use the round-robin event logging technique mentioned in the previous section when the DLL matches a prevalence entry. Then 1/16th of the prevalence entries will be logged and uploaded to the event hub every day, consistently. The timestamps of these entries in the prevalence service will be updated every 16 days. Finally, we can ensure that ever-prevalent DLL paths always stay in the prevalence web service with recent timestamps.


Figure 3: Round-robin logging of prevalent entries

Reducing False Positives

False positives can severely limit an intrusion detection system's effectiveness. Random GUIDs and version numbers embedded in registry and DLL paths pose a challenge for systems based on frequency analysis. One solution is to normalize registry and DLL paths by removing these portions. Another approach is to compare against historical data and flag only those entries that appear sporadically in history.
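The normalization step might look like the following (a hypothetical sketch, assuming simple GUID and version-number patterns; these are not the production rules):

```python
import re

# A dashed GUID, e.g. d075c233-05d4-4b41-9c2d-3fbc930d593f
GUID_RE = re.compile(
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}",
    re.IGNORECASE,
)
# A dotted version number, e.g. 1.2 or 1.2.3.4
VERSION_RE = re.compile(r"\d+(\.\d+){1,3}")

def normalize_path(path: str) -> str:
    """Replace random-looking path segments with placeholders so that
    otherwise-identical paths aggregate to the same prevalence entry."""
    path = GUID_RE.sub("{GUID}", path)
    path = VERSION_RE.sub("{VER}", path)
    return path.lower()

print(normalize_path(r"C:\Program Files\App\1.2.3.4\core.dll"))
# → c:\program files\app\{ver}\core.dll
```

With this, two installs of the same application at different versions map to the same normalized path and no longer look rare to the frequency analysis.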

Reducing Event Volume

The prevalence system significantly reduces the amount of data we log for loaded DLLs and registry values. In our fleet of over 100,000 systems, this approach reduced the logging volume by 75%.

With the help of the prevalence system, we can now build high-fidelity detections for generic DLL side loading, phantom DLL hijacking and registry hijacking. The prevalence system can be applied to other scenarios where a large volume of similar events need to be captured from a large fleet of similar machines.

Conclusion

To better defend Office 365, we are constantly improving the techniques we use to identify unauthorized activities in our datacenters. This prevalence approach allows us to increase the amount of data that we analyze without overwhelming our telemetry pipeline. We are still iterating on this approach; if you have feedback or suggestions, we would love to hear from you.

Partnering to Navigate the Small Business Marketplace [Updated 9/18]


(This article is a translation of Partnering to Navigate the Small Business Marketplace, published on the Microsoft Partner Network blog on August 1, 2017. Please see the linked page for the latest information.)

Having worked in the partner ecosystem for more than 25 years, I am genuinely excited about how the global partner ecosystem has evolved. The growth of the SMB market, with its unique challenges, is especially interesting. Microsoft partners are actively seizing the opportunity to deliver solutions in an environment built around small and midsize businesses. As a result, every small business can now run on technology that makes its vision achievable.

In the Microsoft Partner Network podcast "Navigating the Small Business Marketplace with Alyssa Fitzpatrick," we discuss how partners can pursue the potential of the SMB space.

Understand the unique needs of the SMB market

The SMB market is often assumed to mean small deals and thin margins, but there are far more small businesses than you might expect, enough to make up a large share of the technology industry's customer base. The U.S. Small Business Administration (SBA) estimates that there are roughly 28 million small businesses in the United States, accounting for more than 50% of total domestic sales. Since the 1970s, small businesses have provided 55% of all U.S. jobs and absorbed 66% of net new jobs. Globally, SMBs represent an even larger market share, and most of them need some form of technology to meet their business goals.

Selling into the SMB market, however, involves unique needs, and partners who want to grow in this space should keep the following points in mind.

1. One-size-fits-all solutions don't work

Customization matters just as much to SMB customers as it does to enterprise customers, but SMBs are even more demanding about their technology. Small businesses want solutions that are genuinely useful to their company, and they are quick to spot the ones whose capabilities justify the cost. Make sure you are selling solutions that fit each customer's needs, and focus on selling the right technology at the right time, so that it fully solves the customer's problem.

2. Innovation comes first

What SMBs want from technology is new, practical capability. Services and products that are staples for enterprise customers often do little to help an SMB reach its goals or stand out from competitors. Innovation is what counts, along with seamless deployment and easy integration. SMB customers expect partners, as the technology experts guiding their digital transformation, to deliver solutions they had never even considered. Make sure you are offering the latest and best solutions that can transform the customer's business.

3. Attitude matters too

Regardless of company size, effective selling requires aligning with the customer's industry. You need to put yourself in the customer's position and thoroughly understand their challenges; otherwise you are not really solving a problem, and you can hardly object if you are seen as making a self-serving pitch. Aligning with the customer's industry means understanding their specialty: Do they focus on a single vertical? Are their employees remote workers? What regions do they target? These factors make each customer's business distinct, and your sales model should reflect them. Think like the customer, meet their needs, and know exactly how you can help their business before you ever walk in the door.

Rethink the customer journey

By the time SMB customers actually meet a technology partner, they have already made 80% of their purchasing decision online. That makes a strong digital presence essential when selling to SMBs. Keep in mind that as millennials move into management roles at many SMBs, both the solutions they invest in and the purchasing process itself are changing.

Today's SMBs think very differently. Many SMB buyers look for answers on mobile devices, frequently engage with technology companies through social media, and quickly adopt new technologies such as cloud computing. Your sales model needs to be built on these realities. The flexibility and responsiveness of cloud subscription models are also highly attractive to small businesses.

Although it may require breaking with conventional thinking, the SMB market is a tremendous opportunity for partners who want to connect deeply with their customers. To learn more about connecting with SMB customers, listen to the latest episode of the Microsoft Partner Network podcast. Subscribe to download weekly conversations with industry experts and thought leaders on cutting-edge business and technology topics. Past episodes are available on iTunes, SoundCloud, iHeartRadio, Google Play Music, and YouTube. Please leave a rating after you listen.

You can discuss the SMB market with other partners in the Microsoft Partner Community.

Partner Training Schedule for October 2017

Welcome to our latest list of online courses. If you would like to attend any of the courses below, click the course name and register with your company email address and name.

October online courses

Course Name  Course Date
Microsoft Azure Platform course series
Getting Started with Power BI  2017/10/17 14:00-16:00
Power BI Deep Dive  2017/10/18 14:00-16:00
Grow, manage, and actively use Azure IUR to increase profitability  2017/10/19 10:00-12:00
Introduction to Azure IaaS  2017/10/26 10:00-12:00
Introduction to new Azure features  2017/10/27 10:00-12:00
Azure IoT application architecture and Azure IoT Suite overview  2017/10/31 14:00-16:00
Office 365 course series
Introduction to Exchange Online  2017/10/11 02:00-04:00
Getting Started with SharePoint Online: Migrating to the cloud with SharePoint 2016  2017/10/12 02:00-04:00
Designing Skype for Business Network Requirements  2017/10/18 02:00-04:00
Getting Started with SharePoint Online: SharePoint Online external sharing and partner-facing sites  2017/10/19 02:00-04:00
Global: Ask The Experts on Exchange Online: Exchange Online Protection Overview  2017/10/25 02:00-04:00
Getting Started with SharePoint Online: Migrating to the cloud with OneDrive for Business  2017/10/26 02:00-04:00
Skype for Business Hybrid Deep Dive  2017/10/27 02:00-04:00

November online courses

Course Name  Course Date
Enterprise Mobility Suite course series
A first look at Enterprise Mobility + Security (EMS) and its value proposition  2017/11/27 10:00-12:00
Microsoft Azure Platform course series
Developing IoT applications with Azure IoT Hub  2017/11/01 14:00-16:00
Introduction to Azure Site Recovery  2017/11/02 10:00-12:00
Introduction to Azure Backup  2017/11/03 10:00-12:00
Hybrid cloud storage technical deep dive  2017/11/09 10:00-12:00
Migrating applications to Azure  2017/11/14 14:00-16:00
Migrating application databases to Azure  2017/11/15 14:00-16:00
Windows Server 2016 hybrid cloud infrastructure technical deep dive  2017/11/22 10:00-12:00
Containers and Nano Server technical deep dive  2017/11/23 10:00-12:00
Hybrid cloud infrastructure storage and networking technical deep dive  2017/11/24 10:00-12:00
Microservices and Azure Service Fabric  2017/11/29 14:00-16:00
Office 365 course series
Getting Started with SharePoint Online: Office 365 team collaboration  2017/11/02 02:00-04:00
Enhance Your Business with Skype for Business Online Academy: Cloud Connector Edition Setup and Troubleshooting  2017/11/07 02:00-04:00
Exchange Online Protection Deep Dive  2017/11/08 02:00-04:00
Getting Started with SharePoint Online: How to protect your data in SharePoint Online and OneDrive for Business  2017/11/09 02:00-04:00
Skype for Business Deployment Method  2017/11/10 02:00-04:00
Skype for Business Telecommunication Fundamentals  2017/11/15 02:00-04:00
SharePoint Online Deep Dive: Hybrid OneDrive for Business and hybrid sites  2017/11/16 02:00-04:00
Migrating to Exchange Online  2017/11/22 02:00-04:00
SharePoint Online Deep Dive: Migrating your SharePoint Online with PowerShell  2017/11/23 02:00-04:00
Skype for Business Edge Architectures Configuration  2017/11/29 02:00-04:00
Skype for Business Meetings and Voice  2017/11/30 02:00-04:00
SharePoint Online Deep Dive: SharePoint 2016 hybrid search  2017/11/30 02:00-04:00

December online courses

Course Name  Course Date
Microsoft Azure Platform course series
Introduction to Cortana Intelligence Suite  2017/12/04 10:00-12:00
Hybrid cloud management and security: Introduction and Log Analytics  2017/12/06 10:00-12:00
Hybrid cloud management and security: Automation and security  2017/12/07 10:00-12:00
Office 365 course series
Skype for Business Architectures Design  2017/12/01 02:00-04:00
Exchange Online Hybrid Deployment Fundamentals  2017/12/06 02:00-04:00
SharePoint Online Ask the Experts: Introduction to SharePoint Online Flow  2017/12/07 02:00-04:00
Skype for Business Enterprise Voice  2017/12/13 02:00-04:00
SharePoint Online Ask the Experts: Introduction to SharePoint Online PowerApps  2017/12/14 02:00-04:00
Cloud PBX On Premises PSTN Connectivity  2017/12/20 02:00-04:00
SharePoint Online Ask the Experts: ADFS and AAD Connect for Office 365 deep dive  2017/12/21 02:00-04:00
SharePoint Online Ask the Experts: Introduction to SharePoint Online and Azure B2B  2017/12/28 02:00-04:00