Click the + Create a resource button, search for Azure AI Search, and create an Azure AI Search resource with the following settings:
Subscription: Your Azure subscription.
Resource group: Select or create a resource group with a unique name.
Service name: A unique name.
Location: Choose any available region. If in eastern US, use “East US 2”.
Pricing tier: Basic
Select Review + create, and after you see the response Validation Success, select Create.
After deployment completes, select Go to resource. On the Azure AI Search overview page, you can add indexes, import data, and search created indexes.
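If you want to check the new service from code rather than the portal, here is a minimal sketch using the azure-search-documents Python package; the endpoint and admin key below are placeholders for your own values (found under Settings > Keys on the service page).

# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient

endpoint = "https://<your-search-service>.search.windows.net"   # placeholder
admin_key = "<your-admin-key>"                                   # placeholder

index_client = SearchIndexClient(endpoint, AzureKeyCredential(admin_key))

# A freshly created service has no indexes yet, so this prints an empty list.
print(list(index_client.list_index_names()))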
✅ Create Azure AI Services Resource (Quick Steps)
Go to Azure portal home → Click + Create a resource.
Search Azure AI services → Select Create an Azure AI services plan.
Fill details: Subscription: Your Azure subscription, Resource group: Same as Azure AI Search, Region: Same location as Azure AI Search, Name: Unique name, Pricing tier: Standard S0, Check the acknowledgment box.
Click Review + create → then Create after validation.
Once deployment is done, view deployment details.
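For reference, the same step can be scripted; this is a sketch only, assuming the azure-mgmt-cognitiveservices package, and every name, region, and ID below is a placeholder rather than a value from the lab.

# pip install azure-identity azure-mgmt-cognitiveservices
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import Account, Sku

client = CognitiveServicesManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

# Multi-service Azure AI services account, Standard S0, same region as the search service.
poller = client.accounts.begin_create(
    resource_group_name="<your-resource-group>",
    account_name="<unique-ai-services-name>",
    account=Account(kind="CognitiveServices", sku=Sku(name="S0"), location="eastus"),
)
print(poller.result().name)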
✅ Create a Storage Account (Quick Steps)
In Azure portal home, click + Create a resource.
Search Storage account and select Create.
Fill details: Subscription: Your Azure subscription, Resource group: The same resource group as your Azure AI Search and Azure AI services resources, Storage account name: A unique name, Location: Choose any available location, Performance: Standard, Redundancy: Locally redundant storage (LRS)
Click Review + create → then Create.
After deployment, go to the resource → Select Configuration → Enable Allow Blob anonymous access → Save.
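The equivalent storage account creation can also be done from code; a sketch assuming the azure-mgmt-storage package, with placeholder names and region.

# pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

storage_client = StorageManagementClient(DefaultAzureCredential(), "<your-subscription-id>")

poller = storage_client.storage_accounts.begin_create(
    resource_group_name="<your-resource-group>",
    account_name="<uniquestorageaccount>",   # lowercase letters and digits only
    parameters=StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"),        # locally redundant storage
        kind="StorageV2",
        location="eastus",
        allow_blob_public_access=True,       # matches the "Allow Blob anonymous access" toggle
    ),
)
print(poller.result().provisioning_state)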
✅ Upload Documents to Azure Storage (Quick Steps)
In the left-hand menu pane, select Containers.
Select + Container. A pane on your right-hand side opens.
Enter the following settings, and click Create: Name: coffee-reviews, Public access level: Container (anonymous read access for containers and blobs), Advanced: no changes.
In a new browser tab, download the zipped coffee reviews from https://aka.ms/mslearn-coffee-reviews, and extract the files to a folder named reviews.
In the Azure portal, select your coffee-reviews container. In the container, select Upload.
In the Upload blob pane, select Select a file.
In the Explorer window, select all the files in the reviews folder, select Open, and then select Upload.
After the upload is complete, you can close the Upload blob pane. Your documents are now in your coffee-reviews storage container.
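If you prefer to upload from code, here is a sketch using the azure-storage-blob package; the connection string is a placeholder (Storage account > Access keys), and it assumes the files were extracted to a local reviews folder as above.

# pip install azure-storage-blob
import os
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

conn_str = "<your-storage-connection-string>"   # placeholder
service = BlobServiceClient.from_connection_string(conn_str)

# Create the container with container-level anonymous read access, or reuse it if it exists.
try:
    container = service.create_container("coffee-reviews", public_access="container")
except ResourceExistsError:
    container = service.get_container_client("coffee-reviews")

# Upload every file extracted from the downloaded zip.
local_folder = "reviews"
for file_name in os.listdir(local_folder):
    with open(os.path.join(local_folder, file_name), "rb") as data:
        container.upload_blob(name=file_name, data=data, overwrite=True)
print("Upload complete")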
Index the documents
In the Azure portal, browse to your Azure AI Search resource. On the Overview page, select Import data.
On the Connect to your data page, in the Data Source list, select Azure Blob Storage. Complete the data store details with the following values:
Data Source: Azure Blob Storage
Data source name: coffee-customer-data
Data to extract: Content and metadata
Parsing mode: Default
Connection string: Select Choose an existing connection. Select your storage account, select the coffee-reviews container, and then click Select.
Managed identity authentication: None
Container name: This setting is auto-populated after you choose an existing connection.
Blob folder: Leave this blank.
Description: Reviews for Fourth Coffee shops.
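The same data source definition can be created with the azure-search-documents SDK; a sketch, with placeholder endpoint, key, and storage connection string.

# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    SearchIndexerDataContainer,
    SearchIndexerDataSourceConnection,
)

indexer_client = SearchIndexerClient(
    "https://<your-search-service>.search.windows.net",   # placeholder
    AzureKeyCredential("<your-admin-key>"),                # placeholder
)

data_source = SearchIndexerDataSourceConnection(
    name="coffee-customer-data",
    type="azureblob",
    connection_string="<your-storage-connection-string>",  # placeholder
    container=SearchIndexerDataContainer(name="coffee-reviews"),
    description="Reviews for Fourth Coffee shops.",
)
indexer_client.create_data_source_connection(data_source)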
Select Next: Add cognitive skills (Optional).
In the Attach AI Services section, select your Azure AI services resource.
In the Add enrichments section:
Change the Skillset name to coffee-skillset.
Select the checkbox Enable OCR and merge all text into merged_content field.
Ensure that the Source data field is set to merged_content.
Change the Enrichment granularity level to Pages (5000 character chunks).
Under Save enrichments to a knowledge store, select: Image projections, Documents, Pages, Key phrases, Entities, Image details, Image references
Note - A warning asking for a Storage Account Connection String appears.
Select Choose an existing connection. Choose the storage account you created earlier.
Click + Container to create a new container called knowledge-store with the privacy level set to Private, and select Create.
Select the knowledge-store container, and then click Select at the bottom of the screen.
Select Azure blob projections: Document (container name auto-filled, don't change).
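Under the hood, the enrichment settings above become skills in a skillset. This sketch shows just two of them (OCR and key phrase extraction) defined with the SDK; the wizard-generated coffee-skillset also adds merge, entity, and image skills, page chunking, and the knowledge store projections, which are omitted here, and all keys and endpoints are placeholders.

# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    CognitiveServicesAccountKey,
    InputFieldMappingEntry,
    KeyPhraseExtractionSkill,
    OcrSkill,
    OutputFieldMappingEntry,
    SearchIndexerSkillset,
)

indexer_client = SearchIndexerClient(
    "https://<your-search-service>.search.windows.net",   # placeholder
    AzureKeyCredential("<your-admin-key>"),                # placeholder
)

# OCR skill: extract text from any images found in the documents.
ocr_skill = OcrSkill(
    context="/document/normalized_images/*",
    inputs=[InputFieldMappingEntry(name="image", source="/document/normalized_images/*")],
    outputs=[OutputFieldMappingEntry(name="text", target_name="text")],
)

# Key phrase skill: pull key phrases out of the merged text.
keyphrase_skill = KeyPhraseExtractionSkill(
    context="/document/merged_content",
    inputs=[InputFieldMappingEntry(name="text", source="/document/merged_content")],
    outputs=[OutputFieldMappingEntry(name="keyPhrases", target_name="keyphrases")],
)

skillset = SearchIndexerSkillset(
    name="coffee-skillset",
    description="Enrichment skills for the coffee reviews",
    skills=[ocr_skill, keyphrase_skill],
    # Billed against your Azure AI services resource, like "Attach AI Services" in the portal.
    cognitive_services_account=CognitiveServicesAccountKey(key="<your-ai-services-key>"),
)
indexer_client.create_skillset(skillset)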
Click Next: Customize target index. Change Index name to coffee-index.
Ensure Key is metadata_storage_path. Leave Suggester name blank and Search mode as is.
Mark these fields as filterable: content, locations, keyphrases, sentiment, merged_content, text, layoutText, imageTags, imageCaption.
Review and continue.
Select Next: Create an indexer.
Change the Indexer name to coffee-indexer.
Leave the Schedule set to Once.
Expand the Advanced options. Ensure that the Base-64 Encode Keys option is selected, as encoding keys can make the index more efficient.
Select Submit to create the data source, skillset, index, and indexer. The indexer runs automatically and executes the indexing pipeline, which: extracts the document metadata fields and content from the data source; runs the skillset of cognitive skills to generate more enriched fields; and maps the extracted fields to the index.
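For comparison, here is a sketch of creating the same indexer with the SDK, assuming the data source, skillset, and index from the previous steps already exist; endpoint and key are placeholders.

# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient
from azure.search.documents.indexes.models import (
    FieldMapping,
    FieldMappingFunction,
    SearchIndexer,
)

indexer_client = SearchIndexerClient(
    "https://<your-search-service>.search.windows.net",   # placeholder
    AzureKeyCredential("<your-admin-key>"),                # placeholder
)

indexer = SearchIndexer(
    name="coffee-indexer",
    data_source_name="coffee-customer-data",
    skillset_name="coffee-skillset",
    target_index_name="coffee-index",
    field_mappings=[
        # Equivalent of the Base-64 Encode Keys option: the blob path becomes a safe document key.
        FieldMapping(
            source_field_name="metadata_storage_path",
            target_field_name="metadata_storage_path",
            mapping_function=FieldMappingFunction(name="base64Encode"),
        )
    ],
)
indexer_client.create_indexer(indexer)   # the indexer starts running as soon as it's created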
Return to your Azure AI Search resource page. On the left pane, under Search Management, select Indexers. Select the newly created coffee-indexer. Wait a minute, and select ↻ Refresh until the Status indicates success.
Select the indexer name to see more details.
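You can also poll the indexer status from code instead of clicking Refresh; a sketch with placeholder endpoint and key.

# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexerClient

indexer_client = SearchIndexerClient(
    "https://<your-search-service>.search.windows.net",   # placeholder
    AzureKeyCredential("<your-admin-key>"),                # placeholder
)

status = indexer_client.get_indexer_status("coffee-indexer")
print(status.status)                      # overall indexer status
if status.last_result:
    print(status.last_result.status)      # status of the most recent run, e.g. "success"
    print(status.last_result.item_count, "documents processed")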
Query the index
In your Search service’s Overview page, select Search explorer at the top of the screen.
Notice that the selected index is the coffee-index you created. Below the index selector, change the view to JSON view.
Select Search. The search query returns all the documents in the search index, including a count of all the documents in the @odata.count field. The search index should return a JSON document containing your search results.
Now let’s filter by location. In the JSON query editor field, copy and paste:
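The query text itself did not come through above; based on the description (a search scoped to the locations field for Chicago), it should look something like this:

{
  "search": "locations:'Chicago'"
}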
Select Search. The query searches all the documents in the index and filters for reviews with a Chicago location. You should see 3 in the @odata.count field.
Now let’s filter by sentiment. In the JSON query editor field, copy and paste:
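Again the query text is missing here; a sentiment query matching the description should look something like this:

{
  "search": "sentiment:'negative'"
}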
Select Search. The query searches all the documents in the index and filters for reviews with a negative sentiment. You should see 1 in the @odata.count field.
One problem we might want to investigate is why certain reviews were left. Let’s take a look at the key phrases associated with the negative review. What do you think might be the cause of the review?
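To dig into this outside the portal, here is a sketch that runs the same sentiment query with the azure-search-documents SDK and prints the key phrases of the matching review; the endpoint and key are placeholders, and the field names (locations, sentiment, keyphrases) come from the index built above.

# pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    "https://<your-search-service>.search.windows.net",   # placeholder
    index_name="coffee-index",
    credential=AzureKeyCredential("<your-query-key>"),     # placeholder
)

results = search_client.search(
    search_text="sentiment:'negative'",
    query_type="full",                     # Lucene syntax for the field-scoped search
    select=["locations", "sentiment", "keyphrases"],
    include_total_count=True,
)
print("Matches:", results.get_count())
for doc in results:
    print(doc["locations"], doc["sentiment"])
    print("Key phrases:", doc["keyphrases"])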