
Applied Computing and Informatics (2016) xxx, xxx-xxx

Saudi Computer Society, King Saud University Applied Computing and Informatics

(http://computer.org.sa)

www.ksu.edu.sa www.sciencedirect.com

ORIGINAL ARTICLE

A general framework for intelligent recommender systems

Jose Aguilar a,*, Priscila Valdiviezo-Díaz b, Guido Riofrio b

a CEMISID, Departamento de Computación, Universidad de Los Andes, Mérida, Venezuela; Prometeo Researcher, UTPL, Loja, Ecuador

b Depto. de Ciencias de la Computación y Electrónica, Universidad Técnica Particular de Loja, Ecuador

Received 20 April 2016; revised 29 July 2016; accepted 25 August 2016

KEYWORDS

Recommender system; Cognitive maps

Abstract In this paper, we propose a general framework for an intelligent recommender system that extends the concept of a knowledge-based recommender system. The intelligent recommender system exploits knowledge, learns, discovers new information, and infers preferences and criticisms, among other things. For that, the framework of an intelligent recommender system is defined by the following components: a knowledge representation paradigm, learning methods, and reasoning mechanisms. Additionally, it has five knowledge models about the different aspects that can be considered during a recommendation: users, items, domain, context and criticisms. The combination of these components exploits the knowledge, updates it, and draws inferences from it, among other things. In this work, we implement one intelligent recommender system based on this framework, using Fuzzy Cognitive Maps (FCMs). Finally, we evaluate the intelligent recommender system with specialized criteria linked to the utilization of knowledge, in order to assess the versatility and performance of the framework.

© 2016 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

The main goal of Recommender Systems (RS) is to help users in their decision making. This area proposes the development of RS that provide high-quality recommendations in different contexts. In general, a recommender system is software which provides suggestions of items for users [15]. Various recommendation techniques have been proposed; RS approaches have emerged from domains such as artificial intelligence, data and semantic mining, and information retrieval. RS have traditionally been classified as content-based, collaborative, knowledge-based, and hybrid.

* Corresponding author. E-mail address: aguilar@ula.ve (J. Aguilar). Peer review under responsibility of King Saud University.

A knowledge-based recommender system exploits its knowledge only in a naive way. We argue that a recommender system has intelligent behavior if it has the following set of capabilities: knowledge representation, learning capabilities, and reasoning mechanisms. The combination of these capabilities allows the system to exploit its knowledge more fully, update it, and draw inferences from it, among other things.

Based on these ideas, in this paper we propose a new type of recommender system, called Intelligent Recommender System

http://dx.doi.org/10.1016/j.aci.2016.08.002

2210-8327 © 2016 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

(IRS), which is an extension of the knowledge-based RS. The IRS considers learning algorithms, knowledge representation mechanisms, and reasoning engines, among other aspects. In this paper, we define an IRS and describe its components and the relationships among them, among other things.

An IRS can use any intelligent technique (fuzzy logic, ontological approaches, etc.) for its implementation. Additionally, we give an example of its application using FCMs. FCMs have been used in different domains [1-3]. FCMs are based on Cognitive Map (CM) theory: they model a system through concepts that describe its main characteristics (variables or states of the system) and the causal relationships between them. FCMs rely on fuzzy logic theory to define their structure and their inference process from a given data input. FCMs have been applied to diverse fields such as group decision support and political analysis [2].

In the following section we present some background about RS. Section 3 presents the theoretical bases of our approach, and Section 4 presents the IRS framework. Section 5 describes the knowledge models of the IRS in detail. Section 6 presents the implementation of the IRS using FCMs. Lastly, the remaining sections present a case study, the utilization of the FCM-based IRS, the experiments, and the analysis of the results.

2. Literature review

In the literature, there are a lot of papers about RS. In this section we present some works, specifically knowledge-based RS or RS based on intelligent techniques. In [4] the Team Recommender System (TRS) is presented, which is a knowledge-based RS that helps organizations define the team needed to carry out a task requiring multiple skills. TRS solves two important problems. First, it manages the semantic heterogeneity that occurs when the data describing the same entities are represented in different ways. Second, it manages the excess of specialization toward the objects of highest similarity with the user, leaving irrelevant information out of consideration. Additionally, they develop an ontology used to handle the semantic heterogeneity problem. In [5,6] an overview of knowledge-based RS in different domains, such as restaurants and movies, is presented. Additionally, they discuss the strengths and weaknesses of knowledge-based and collaborative-filtering RS, and introduce a hybrid RS that combines the two approaches. In their approach, the knowledge-based RS bootstraps the collaborative filtering engine while the data pool is small, and the collaborative filter acts as the post-filter of the knowledge-based RS.

In [9] a work to define the profile of the customers is presented. Additionally, algorithms for generating personalized buying proposals, based on the collaborative, content-based filtering, and knowledge-based approaches, are presented. The profile is created from the user's nature, and evolves according to the events observed. Also, they present some ideas about RS based on the social web and on the consumer buying behavior theory. In [10] a recommendation system for academic papers is defined. The paper proposes user situation awareness and a recommendation system based on fuzzy clustering analysis and fuzzy cognitive maps. They use fuzzy clustering analysis to describe the correlation between lexical semantics, and an FCM to define the qualitative distribution of
user interests. The fuzzy clustering analysis introduces the view of information entropy theory. It carries out a quantitative description of the information in the database, and generates a tree data structure based on this, which is converted into a net data structure used by an FCM for the recommendations. They verify the validity of the algorithm by recommending sites of academic theses. In [12] a recommender system based on fuzzy logic is proposed. This personalized recommender system, driven by a fuzzy logic technique, mines information about the features of laptop computers, and provides services to potential buyers by recommending optimal products based on their personal needs. They use the Fuzzy Near Compactness concept to measure the similarity between consumer needs and product features. In [13] a fuzzy linguistic approach to represent the user ratings, and a fuzzy multicriteria decision making approach, are used to rank the items relevant to a user. Their system handles the uncertainty and fuzziness of human decision-making behavior. For that, their model of user ratings considers the subjective, imprecise and vague nature of the user's perceptions and opinions, using the fuzzy set paradigm. They test their approach in a music recommender system.

In [17] an ontological approach to recommend on-line academic research papers is explored. Research papers are classified using an ontology. Recommendation algorithms are used to recommend papers seen by similar people. They create user profiles based on an ontology of research issues. Additionally, they use a profile visualization approach in order to acquire profile feedback. The ontological inference improves the user profiling, and external ontological knowledge is used to bootstrap the recommender system. In [18] an E-Learning RS is presented, based on the use of web mining techniques, to build an agent that can recommend online learning activities or course Web sites, based on the learners' access history, to assist the online learning process. These techniques are integrated on the RS platform. Additionally, they present a survey of E-Learning RS in the literature. In [19] a fuzzy-based recommender system for stimulating political participation and collaboration is presented. The recommendation engine is based on a modified fuzzy c-means algorithm, and the Sammon mapping technique is used for visualizing recommendations. Additionally, they develop a framework for eParticipation, which allows analyzing different projects and their development, in order to evaluate the citizens' participation and empowerment. In [20] the role of cognitive decision effort in RS is explored, using indicators about "information quality" and "service quality" to examine the performance of the RS according to the user opinion, in an Internet book store. They conclude that the information quality of the RS influences the consumer shopping decision-making process, and that the e-commerce platform provides the necessary recommendations and information, but the recommendation system does not significantly influence the decision-making effort during the consumer's shopping decision-making process.

In [21] it is determined that collaborative filtering-based RS can be improved by incorporating side information, such as natural language reviews. Additionally, they introduce a model of reviews based on a recurrent neural network, and study its effects on collaborative filtering performance. The recurrent
neural network has the ability to act as a regularizer of the item representations to be recommended. In [22] a personalized recommendation system is introduced based on accurate models which capture the user preferences. They propose a picture-based approach: they use a set of travel-related pictures selected by a user, and an individual travel profile is deduced. This is accomplished by mapping those pictures onto seven basic factors, which reflect different travel aspects. This model constitutes the basis of their recommendation algorithm. In [23] a semantic recommendation approach for pedagogical resources is proposed within a learning ecosystem. This approach is based on a voting system, where each member of the ecosystem evaluates the pedagogical resources found in his/her sharing space. In this way, they define a coherent learning ecosystem that promotes collaborative learning, which allows exchanging and sharing knowledge and/or skills. Finally, [15] is a book which presents trends, concepts, methodologies, challenges and applications of RS. This book describes the classical methods, as well as novel approaches such as context-aware RS and RS in the social web.

3. Recommender systems

RS are techniques used to provide suggestions of items to the user [15]. Formally, a recommendation problem can be defined by a utility function "rec", which predicts the utility of an item i from a set of items I for a specific user u from a set of users U. Rec is a function rec: U x I -> R, where R is in the interval [0, 1] and is the utility score of the recommended item. Rec denotes the item's capability to satisfy the needs of the users. In this way, the prediction task of a recommender system is to define this utility score for a given user and item. In general, the data and knowledge available for RS can be very diverse [15]:

• Items are the objects to recommend. They are defined by their complexity and their utility. The complexity of an item is defined by its structure, representation, and dependence of other items. Normally, RS recommend one specific type of item (movies, music, etc.).

• Users of RS are very diverse with respect to their interests, goals, etc. RS personalize the recommendations with the information about the users. This information can be organized in different ways, and the recommendation technique used defines the information in the user model.

• Transactions define the interaction between the users and the RS. Transactions are the data generated during the human-RS interaction. The types of information used by the recommendation generation techniques are very diverse, for example, the item selected by the user, or the description of the context of the query. Also, the transaction can include explicit feedback from the user, normally called a critique, such as the rating of an item.
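Returning to the formal definition above, the following minimal Python sketch illustrates a utility function rec: U x I -> [0, 1] and the top-N selection (the prediction task) it induces. The function and attribute names are hypothetical and the scoring rule is a placeholder for illustration, not the method proposed in this paper.

```python
from typing import Dict, List, Tuple

# Hypothetical utility function rec: U x I -> [0, 1]; user and item are
# represented as feature dictionaries. The scoring rule is a placeholder.
def rec(user: Dict[str, float], item: Dict[str, float]) -> float:
    """Toy utility score: overlap between the user preferences and the item features."""
    shared = set(user) & set(item)
    if not shared:
        return 0.0
    score = sum(min(user[k], item[k]) for k in shared) / len(shared)
    return max(0.0, min(1.0, score))

def recommend_top_n(user: Dict[str, float], items: List[Dict[str, float]], n: int = 3) -> List[Tuple[int, float]]:
    """The prediction task: rank all candidate items by their predicted utility for the user."""
    scored = [(idx, rec(user, item)) for idx, item in enumerate(items)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:n]
```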

Various recommendation techniques have been proposed in the literature [9,15] (see Table 1).

The concept of critiquing is very important in our framework. It is based on the idea that users specify their requests as goals not satisfied by the currently recommended item [9]. Critiquing-based RS articulate preferences without forcing users to specify concrete values for item properties. The major steps of a critiquing-based RS are [9]:

- Item recommendation. This step selects a set of items r to be presented to the user. In the first critiquing cycle, the selected items are based on a user query q (similarity between the requirements and the candidate items).

- Item reviewing. In this step the user reviews the recommended item, in order to accept the recommendation or select a critique, which starts a new critiquing cycle. If a critique has been chosen, only the items that fulfill the criteria defined in the critique are further taken into account (this reduces the candidate item set).

Our IRS extends the ideas behind the classical RS with this concept [5,6], because it tries to understand users, discover their interests, etc., through the creation of knowledge, reasoning, etc. During this process, we use the notions of "unit critique" and "compound critique".

Table 1 Taxonomy of RS.

- Content-based recommendation approach: the RS recommends items which are similar to the ones that the user chose in the past. The similarity is computed according to the characteristics associated with the compared items.

- Collaborative recommendation approach: this approach uses the opinions of the users' community, or information about past behavior, to predict the items the user will be interested in.

- Demographic recommendation approach: it recommends items according to the demographic profile of the user. The idea is that each demographic niche has different recommendation needs.

- Utility-based or knowledge-based recommendation approach: this approach recommends items on the basis of knowledge about how the item characteristics meet the needs and preferences of the users. There are two cases: case-based recommenders use the knowledge about both the user and the items to carry out recommendations based on similarity metrics, while constraint-based recommenders use knowledge bases with sets of recommendation rules about how to map user requirements to item characteristics.

- Community-based recommendation approach: this approach is based on the idea that people have more confidence in recommendations from their friends than from anonymous individuals. The popularity of social networks has generated interest in these RS.

- Hybrid approach: this approach is the combination of the previous techniques, in order to use the advantages of each one. The combination of the different techniques generates better or more precise recommendations, and better exploits the current information.

A unit critique operates over one specific property of an item. It defines a change request over a single item property. For example, a unit critique in a PC IRS can infer that the user is interested in a PC with more memory than the normally recommended PC; "more memory" is a critique over the memory feature. There are also critiques that operate over multiple properties, called compound critiques. For example, a compound critique in our IRS for the PC domain can infer lower price, faster CPU, and more memory. Compound critiques are very important because they reduce the number of critiquing cycles and allow a faster navigation of the item space.
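As an illustration of the critiquing concepts just described, the sketch below encodes a unit critique ("more memory") and the compound critique from the PC example (lower price, faster CPU, more memory) as predicates that prune the candidate item set in one critiquing cycle. The attribute names (price, cpu_ghz, memory_gb) are assumptions made only for this example.

```python
# Hypothetical critique predicates for the PC example above; the attribute
# names (price, cpu_ghz, memory_gb) are assumptions for illustration.
def unit_critique_more_memory(candidate: dict, reference: dict) -> bool:
    """Unit critique: a change request over a single item property ("more memory")."""
    return candidate["memory_gb"] > reference["memory_gb"]

def compound_critique(candidate: dict, reference: dict) -> bool:
    """Compound critique: lower price, faster CPU and more memory at the same time."""
    return (candidate["price"] < reference["price"]
            and candidate["cpu_ghz"] > reference["cpu_ghz"]
            and candidate["memory_gb"] > reference["memory_gb"])

def apply_critique(candidates: list, reference: dict, critique) -> list:
    """Each critiquing cycle keeps only the items that satisfy the chosen critique."""
    return [c for c in candidates if critique(c, reference)]

# Example: the compound critique prunes the candidate set in a single cycle.
pcs = [{"price": 900, "cpu_ghz": 3.2, "memory_gb": 16},
       {"price": 1200, "cpu_ghz": 2.8, "memory_gb": 8}]
current = {"price": 1000, "cpu_ghz": 3.0, "memory_gb": 8}
print(apply_critique(pcs, current, compound_critique))  # keeps only the first PC
```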

4. A framework for an intelligent recommender system

In this section, we define the main components of our IRS. The difference with respect to a knowledge-based recommendation approach is that an IRS has four main elements:

• A knowledge acquisition mechanism based on learning algorithms.

• An explicit knowledge model, which represents all the knowledge necessary to recommend.

• A reasoning mechanism to infer information from the stored knowledge.

• A criticality system based on the automatic inference capability of the IRS.

Our IRS does not only recommend items based on specific knowledge about how the item features meet user needs and preferences, as in the knowledge-based recommendation approach; our system also uses the ratings of the items, like the collaborative approach, and it discovers the aspects, interests, and
properties that the user would like about the items (criticality system), infers the rating of the items, etc. In this way, it mixes the classical RS idea based on similarity to infer how much the user needs the item [9,15] and the concept of criticality deduced automatically to infer user goals that are not satisfied by the item currently under consideration [22].

The IRS exploits all the knowledge, which is obtained automatically (by the learning mechanism) and is modeled appropriately, in order to be used by the reasoning mechanisms to infer how much the user needs the item and whether the user's goals are satisfied by the current items. Basically, two aspects must be defined in our IRS: its architecture, and the knowledge to model. The general architecture is shown in Fig. 1.

The main component is the semantic knowledge model, which stores the different types of knowledge used for recommending items. This knowledge must be updated, and for this reason the system requires learning mechanisms. Finally, in order to exploit the knowledge, it uses a reasoning mechanism, which is responsible for recommending items using all the knowledge available. The different components of the architecture are listed below:

- Knowledge modeling: the main aspect is to define the paradigm of knowledge representation. There are a lot of paradigms [1,2,4,10,12]: ontologies, fuzzy rules, conceptual maps, etc. The main points when selecting one are its capability to represent all the knowledge available, and the possibility of defining reasoning mechanisms with it. In general, our recommendation paradigm requires different types of knowledge, such as the user model, the contextual model, the domain model, the model of the items that have been recommended, and the information about the behavior between the users and items, which stores ratings, critiques, etc. (we call it the critical model). Section 5 describes in detail the different types of knowledge of our IRS.

Figure 1 The IRS architecture.

- Knowledge acquisition: this phase is defined to learn about the current situation, etc. There are a lot of approaches (supervised, unsupervised, etc.) [21], but the main point is to define an approach that allows all the knowledge available to be discovered at a given moment. In general, the different machine-learning techniques can potentially be used, according to the context, for example with information available online and real-time information. The sources of knowledge are very varied, and they can be structured (for example, the transactional database, or data warehouse, of an organization), semi-structured (for example, XML files) or unstructured (for example, GPS tracking information, audio streams, etc.) data, which represent information about the users, context, etc. The knowledge is acquired through learning mechanisms based on data mining and semantic mining (web mining, text mining, ontological mining), among other techniques [6,9]. The learning mechanisms to be used depend on the knowledge model and the source of data. For example, if the knowledge model is an ontology and the source of data is the web, we can use semantic mining techniques to extract knowledge. Additionally, in this case, data science tasks are very important, in order to explore, clean, transform and reduce the data before applying the learning techniques to extract the knowledge [15].

- Reasoning mechanism: according to the chosen paradigm, there are specific reasoning mechanisms which can be used. The main point is that these mechanisms must allow inferring the user needs and whether the user goals are satisfied by the current items, considering all the knowledge available. There are three main reasoning mechanisms that can be used: induction, abduction, and deduction. Each one can be used for different tasks, such as analyzing why a recommendation can be given, or predicting an item that can be interesting for a user. That means it allows various types of reasoning: finding a "relaxation" or "compromise" (for example, what if the user's requirements cannot be fulfilled?), finding a "diagnosis" (for example, why is a certain item recommended?), carrying out a "verification" and "reparation-debugging" (for example, what if the user requirements are inconsistent?), and so forth. The idea is to define logical explanations about the different aspects to be considered during a recommendation process, using the available knowledge. Classic RS respond only to some of these questions.

- Criticality system: it is an automatic system to infer the user preferences without asking users about them, using only the knowledge stored in the RS [9]. In each cycle of a recommendation session there is a reasoning phase where the aspects to accept or criticize are deduced. These cycles continue until a recommendation is carried out. The learning mechanisms provide the necessary knowledge, like a feedback procedure, to deduce the preferences of the users and to infer the ratings of the items, built over a series of recommendation cycles. If a new cycle has been triggered, then only the items that fulfill the criteria defined in the critique are further taken into account (a reduction of the candidate item set, in order to reduce the search space). In general, this process continues until the reasoning
mechanism determines that the user can accept the recommendation, has exhausted all the possibilities, or terminates the recommendation cycles.
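The following sketch summarizes, under assumed class and method names, how the four architectural components could be wired together in one recommendation cycle. It is a structural illustration of the framework only, not the implementation evaluated later in this paper; all bodies are placeholders.

```python
class KnowledgeModel:
    """Stores the five knowledge models: users, items, domain, context and critiques."""
    def __init__(self):
        self.users, self.items, self.domain, self.context = {}, {}, {}, {}
        self.critiques = []          # critical model, built over the recommendation cycles

class LearningMechanism:
    def update(self, model: KnowledgeModel, observations: list) -> None:
        """Knowledge acquisition: refresh the knowledge model from new observations."""
        model.critiques.extend(observations)

class ReasoningMechanism:
    def rank(self, model: KnowledgeModel, user_id: str, candidates: list) -> list:
        """Infer how much the user needs each candidate item (placeholder ordering)."""
        return list(candidates)

class CriticalitySystem:
    def deduce_critique(self, model: KnowledgeModel, user_id: str, item) -> bool:
        """Decide, without asking the user, whether the item should be criticized."""
        return False                 # placeholder: accept the first recommendation

def recommendation_cycle(model, learner, reasoner, critic, user_id, candidates):
    """One recommendation session: rank, criticize, shrink the candidate set, repeat."""
    while candidates:
        best = reasoner.rank(model, user_id, candidates)[0]
        if not critic.deduce_critique(model, user_id, best):
            return best                                        # recommendation accepted
        learner.update(model, [("critique", user_id, best)])   # feedback for learning
        candidates = [c for c in candidates if c != best]      # reduce the search space
    return None
```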

5. Knowledge model in our intelligent recommender system

The main component of our IRS is knowledge. An IRS must exploit all the knowledge available, and to do that, the advances made in different domains (such as information retrieval and data mining) must be used in order to extract this knowledge. Classically, an RS estimates the similarity between the item properties and the user preferences, or estimates the ratings of the items that have not been seen by a user. The IRS exploits knowledge to infer the ratings of the items, to infer the preferences of the users, and to match the item properties with the user preferences. For that, it defines different types of knowledge:

- An extended user profile (including opinions, critiques, etc.): normally, the information that is modeled concerns his/her preferences, his/her personal information (age, gender, profession, and education), etc. [8,17,22]. Here, we propose to extend it with new information about his/her opinions and critiques, and his/her relationships with other users (friend groups, etc.). The user profile model, together with the rest of the semantic model of the IRS, must allow inferring the preferences and needs of the users.

- An extended item profile: it represents a full description of an item based on four dimensions: (i) the general description of the product (name, branches that produce the item, etc.); (ii) the functional information about the item (its functions, etc.); (iii) the structural information about the item (its components, the relationships among them, dependencies, etc.); and finally, (iv) the operational information about the item (how it can be used, etc.). Some of this information can be learned, or inferred from the information stored.

- Context and domain knowledge: it is very important to know the domain where the items will be used, the context where the individual is going to make the decision, etc. The contextual knowledge is all the knowledge that explains a given situation [15,20]. The domain knowledge is the knowledge of an area of a discipline, a human activity, etc. [15,23]. This type of knowledge is normally not considered, or the RS must be customized to be used in a specific context, changing part of its structure. Here, we propose to model these aspects explicitly.

- Critical knowledge: this is knowledge that must be discovered, based on the transactions over the RS (relations between users and items). A transaction may describe the context of the recommendation, may refer to the item chosen, and may include the feedback the user has provided, among other things [9,15]. Normally, this is behavior-based knowledge, and it must be discovered using machine-learning techniques in order to obtain interesting patterns. This knowledge normally represents the interests of the users. In our IRS, this knowledge must be learned or discovered.

In our case, the similarity between the item properties and the user preferences, or the estimation of the ratings of the items, is the result of an inference process over the knowledge model used (for example, if the knowledge model is an ontology, then we can apply ontological reasoning).

In general, our IRS must use the knowledge available at a given moment without degrading its performance. That is, depending on the application domain and the usage scenario, maybe only parts of the previous knowledge are available, and the IRS must still be able to carry out recommendations. In this way, the definitions of the different types of knowledge are very important, but our IRS is robust enough to work with the knowledge available. The main difference with knowledge-based RS is that they require knowledge engineering, while an IRS uses learning mechanisms over different sources: product databases, social media, etc. Our system can exploit the different techniques of machine learning and semantic mining in order to build the knowledge that is needed. For example, if the knowledge model is an ontology, then we can apply the merging and alignment of ontologies (two types of semantic mining techniques) to enrich the model [24]. In this way, our IRS avoids the knowledge acquisition problem of knowledge-based RS. Additionally, it does not have the ramp-up or cold-start problem, because the IRS can draw inferences about the users or items even when it has not yet gathered sufficient information. Its recommendations do not depend only on the user ratings (as in collaborative RS), or on gathering information about a particular user or item (as in content-based RS), because it infers this information. We dedicate the rest of this section to explaining this knowledge.

5.1. Extended user profile

In this model, the knowledge about the users is defined. This model must allow responding to queries such as (see Table 2): Who is the person? What personal data are available? How is his/her performance? What projects were assigned? What tasks have been carried out? What skills does he/she have? Table 2 defines the user profile of the IRS.

Some of this information is obtained by asking the users, but some of it is obtained by learning approaches. For example, the networks of friends can be obtained using social network analysis [22], or the ideological trend using semantic mining mechanisms.

5.2. Extended item profile

In this model, we define the set of features that represent an item. There are different types of characteristics that can describe an item:

• General or objective characteristics, such as: functional information, structural information, descriptive information.

• Specific or subjective characteristics, which can be: level of use, score or rating, utility.

These characteristics can be represented by abstract concepts or attributes that can be described by their properties, which can be the following:

• Intrinsic or static (own properties).

• Dynamic (changing properties).

In our case, we are going to characterize each item by four (4) dimensions [11,14]:

• Descriptive: information about its intrinsic characteristics. It answers the question: What is it?

• Structural: elements that compose the item (entities, attributes, processes), relationships and constraints between them, etc. It reflects the invariant structure. It answers the question: What is its composition?

• Functional: specific functions of the item. It answers the question: What does it do or allow?

• Operational: how the item can be used, interacted with, or integrated with other systems. It answers the question: Can it be used/reused?

Table 3 defines the different attributes that describe an item in our system (the first letter next to the attribute indicates whether it is static (I) or dynamic (D), and the second whether it is Descriptive (D), Structural (S), Functional (F) or Operational (O)):

Some of the subjective attributes are defined here, but they really belong to the critical knowledge, because they are knowledge based on the judgment of the users. In our case, this judgment is inferred using reasoning mechanisms over the knowledge stored and the transactional information of the users.
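As an illustration, the extended item profile can be encoded as a simple data structure in which each attribute carries its (I|D, D|S|F|O) tag from Table 3. The field selection below is a hypothetical subset for illustration, not a complete schema of the profile.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical encoding of the extended item profile. Each attribute is tagged
# with (I|D, D|S|F|O): static/dynamic and Descriptive/Structural/Functional/Operational.
@dataclass
class ExtendedItemProfile:
    item_id: str                                          # (I, D)
    name: str                                             # (I, D)
    components: List[str] = field(default_factory=list)   # (I, S)
    goal: str = ""                                        # (I, F)
    level_of_use: float = 0.0                             # (D, O) subjective, inferred from critiques
    utility: float = 0.0                                  # (D, O) subjective, inferred from critiques
    extra: Dict[str, str] = field(default_factory=dict)   # remaining attributes of Table 3
```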

5.3. Context and domain knowledge

Two additional types of knowledge, which give more semantic information to our IRS, are as follows:

- Knowledge about the context: in this case, it is necessary to capture all the information specific to the context in which the recommendation is given (see Table 4).

- Domain knowledge: It is the valid knowledge used in a given area of human activity, in a specialized discipline, etc. Normally, the experts use and develop their own domain knowledge. In IRS, it refers to the specific area/domain where it will be used.

Table 2 User profile in IRS.

- Personal data: Name, address, ID, sex, age, relationship status, personality
- Physical features: Size, weight, physical defects
- User type: Student, researcher, etc.
- Languages
- Some tastes, preferences: We can ask the user explicitly, but some of this information can be inferred
- Education: Level of education: PhD, etc.
- Occupation
- Socio-cultural aspects: Behavior patterns, needs, cultural behavior
- Economics aspects: Income, buying habits, etc.
- Political aspects: Ideological trend, etc.
- Most influential sectors: Networks of friends, etc.
- Technological skills
- Intellectual capabilities
- Projects developed
- Positions held

5.4. Critical model

The user's behavior is modeled in order to determine his/her preferences, opinions, trends, etc. [10]. The model represents the opinions and critiques (see Table 5). Particularly, the user opinion (critique) about the recommended items is stored, as a result of an inference process, a learning process, a calculation (estimation), or direct elicitation from the users. This model is navigated (it is the space of the critiques of the products) in order to be used during the recommendation process, and it represents the discovered preferences. In this way, the IRS can recommend items that satisfy the ongoing critique, that are similar to the previous recommendation, that satisfy the majority of the previous critiques, etc. Our system infers the user preferences and builds them through a cycle of recommendations.

6. Example of specification of our IRS in an intelligent technique

One of the advantages of our system is that its implementation can be carried out using any intelligent technique. The intelligent paradigm (ontologies, evolutionary approaches, etc.) can be chosen according to the source of information, the tools available, etc. In this section, we give one example of implementation based on FCMs.

6.1. Fuzzy cognitive maps

Cognitive Maps (CMs) are directed graphs that model a real system as a set of concepts and the causal relationships between them [1-3]. Each concept is a node in the graph, and represents a characteristic/state of the system. The causal relationships have positive or negative signs, with specific weights. The value of a node is the activation degree of a concept at a given time. This value is computed from the values of the concepts at the preceding state and the weights of the incoming edges.

Specifically, a CM is defined by n concepts (mathematically, an n-dimensional state vector A) and an n x n weighted matrix E. Each element $E_{ij}$ of the matrix is the value of the weight between concepts $C_i$ and $C_j$ (it measures how much $C_i$ causes $C_j$). The activation level $A_i$ of each concept $C_i$ is calculated by the following:

$$A_i^{new} = f\Big(\sum_{j \neq i} A_j E_{ji}\Big) + A_i^{old} \quad (1)$$

where $A_i^{new}$ is the activation level of $C_i$ at time t+1, $A_j$ is the activation level of $C_j$ at time t, $A_i^{old}$ is the activation level of $C_i$ at time t, and f is a threshold function. In this way, the new state vector A is defined by the change in the activation level of each concept due to the other concepts. A CM starts with $A_0 = S_0$, and repeatedly applies Eq. (1) until the system converges (for example, when $A^{new} = A^{old}$) or another stopping criterion is met. The last value of the state vector A is the response to the "what if" question [1-3].
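A minimal numerical sketch of this inference process is given below, assuming a sigmoid threshold function f and clipping the activations to [0, 1] (an assumed convention, not prescribed by the paper); the stopping test mirrors the convergence criterion described above.

```python
import numpy as np

def sigmoid(x):
    """A common choice for the threshold function f."""
    return 1.0 / (1.0 + np.exp(-x))

def fcm_step(A: np.ndarray, E: np.ndarray) -> np.ndarray:
    """One application of Eq. (1): A_i_new = f(sum_j A_j * E_ji) + A_i_old,
    clipped to [0, 1] to keep the activations bounded (an assumed convention)."""
    return np.clip(sigmoid(A @ E) + A, 0.0, 1.0)

def fcm_run(A0: np.ndarray, E: np.ndarray, tol: float = 1e-4, max_iter: int = 100) -> np.ndarray:
    """Iterate the map from the initial state A0 until the state vector stabilizes."""
    A = A0.copy()
    for _ in range(max_iter):
        A_new = fcm_step(A, E)
        if np.max(np.abs(A_new - A)) < tol:    # convergence: A_new ~ A_old
            return A_new
        A = A_new
    return A
```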

CMs have been extended by Kosko, considering fuzzy logic and neural network theories [2]. This approach is called FCM; its concepts can be defined as fuzzy sets, and the causal relationships between the concepts can be defined by fuzzy implications. Also, the threshold function applied to the weighted sums can be fuzzy. There are different learning algorithms for FCMs in the literature; for example, some are based on the opinion of experts and others on historical data. In [1,3] an exhaustive presentation of the different learning algorithms is given.

6.2. IRS based on FCM

We propose an FCM based on two levels. The first level contains the inferred concepts, which represent the knowledge about the critiques of the products, the preferences of the users, the recommendations, among other concepts. The second level represents the description of the current situation: the information about the items and users, the context, etc. They are defined according to the models presented in Section 5. In this way, the first level is the knowledge generated by our system, which can be used in different ways: to recommend, to discover

Table 3 Item profile in IRS.

General attributes:
- Item ID (I, D)
- Name (I, D)
- Type of item (I, D)
- Description (I, D)
- Localization (I, D)
- Author of the item (I, D)
- Date of elaboration (I, D)
- Provider (I, D)
- Dimension (I, D)
- Version (I, D)
- Format (I, D)
- Components (I, S)
- Relationship between components (I, S)
- Constraints (I, S)
- Technical Requirements (I, S)
- Goal (I, F)
- Requirement cover (I, F)

Subjective attributes:
- Level of use (D, O)
- Utility (D, O)
- Punctuation/Qualification (D, O)
- Type of problems where it can be used (D, O)
- Reusability (D, O)
- Extensibility (D, O)
- Interoperability (D, O)

Table 4 Contextual Knowledge in IRS.

- Environmental conditions

- Space characteristics

- Time: current and historical

- Activities

- Resources and devices

- State

- People

Table 5 Critical model in IRS.

- Behavior patterns: Unit and compound critiques
- Trends: Directional or replacement
- Opinions: Directional or replacement
- Preferences: Unit and compound critiques
- Punctuation: Unit and compound critiques

Figure 2 Our multilevel FCM.

information about the users, products, etc. The second level represents all the knowledge available at a given moment (see Fig. 2, the FCM developed with the FCM Designer tool [7]):

(a) First level:

In this level, there are concepts inferred from the concepts of the second level or from concepts on the same level. Mainly, they represent concepts linked to the recommendations (our system can recommend different things, for example interesting items or similar items), but additionally, there are other types of concepts representing information inferred about the products or items (for example, users' opinions, or the punctuation of the item). In general, the attributes of the critical model belong to this level, and they reduce the space of candidate items. The concepts of this level are defined in Table 6.

(b) Second level:

In this level, the concepts represent the different attributes of the user and item profiles, extended by the knowledge about the context and domain (see Section 5). They are grouped according to the knowledge that is represented (users, products, etc.).

The relationships between the concepts are defined according to the causal relations between them. They determine the dependency relationships among the concepts. In this case, the learning process adapts the relationships among the concepts: deleting, updating or adding them. In this way, our FCM is reconfigurable according to the quality of the recommendations given.

Fig. 2 shows the initial FCM. In the first level we can see the inferred concepts listed in Table 6, such as the preferences, usability, Use Level ("nivel de uso" in Spanish), etc. In the second level the concepts are those defined in Section 5: the user profile attributes of Table 2 (Languages ("idioma" in Spanish), Sex, User type ("tipo de usuario" in Spanish), etc.), and the item

Table 6 Inferred concepts.

- Item of interest: it refers to the usefulness of the item for the user
- Punctuation: it refers to the score or rating that a user provides to the item
- Item preferred by user: it is the priority that a user has for the item
- Preferences: it refers to user-defined preferences based on his/her profile information
- Use Level: indicator of the user interaction with the item
- Usability: it refers to the ease with which a user can use the item
- Interoperability: it concerns whether the item can be integrated into different systems or platforms
- Similar user: it concerns whether there are other users with the same characteristics as the active user

profile attributes of Table 3 (Name ("Nombre" in Spanish), Localization, etc.), among others. The initial values of the arcs have been defined by a group of experts on learning resources, and they have been adapted using the learning mechanisms of the FCM Designer tool [7].

Particularly, we can see that in the FCM the similarity is inferred from the relationships between the concepts of the item and user profiles and the inferred concepts (in particular, the "Item of interest" and "Item preferred by user" concepts). The recommendation and preference concepts are examples of elements of the critical model included in the FCM model which are also inferred. In this way, all the elements of the different models are included naturally in the FCM.
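To illustrate the two-level structure, the sketch below wires a few second-level profile concepts (from Tables 2 and 3) to first-level inferred concepts (from Table 6) and applies one step of Eq. (1). The concept subset and the weights are illustrative placeholders, not the values assigned by the experts; in practice the full map is iterated to convergence as described in Section 6.1.

```python
import numpy as np

def f(x):
    """Sigmoid threshold function."""
    return 1.0 / (1.0 + np.exp(-x))

# Second-level (profile) concepts feed first-level (inferred) concepts.
concepts = ["user_type", "education", "item_goal", "requirements_cover",   # level 2
            "preferences", "use_level", "item_of_interest"]                # level 1
idx = {c: i for i, c in enumerate(concepts)}
E = np.zeros((len(concepts), len(concepts)))
for src, dst, w in [("education", "preferences", 0.6),
                    ("user_type", "use_level", 0.4),
                    ("item_goal", "item_of_interest", 0.7),
                    ("requirements_cover", "item_of_interest", 0.5)]:
    E[idx[src], idx[dst]] = w        # E[j, i]: how much concept j causes concept i

A = np.zeros(len(concepts))
A[idx["user_type"]], A[idx["education"]] = 1.0, 1.0
A[idx["item_goal"]], A[idx["requirements_cover"]] = 0.6, 0.8

# One application of Eq. (1) already shows the profile concepts activating the
# inferred concepts; in practice the map is iterated to convergence (Section 6.1).
A_new = np.clip(f(A @ E) + A, 0.0, 1.0)
print({c: round(float(A_new[idx[c]]), 3) for c in concepts[4:]})
```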

7. Case study

In this case study, we define an IRS for learning resources. In Section 5 we have presented the general knowledge used by the IRS.

Table 7 User profile in IRS and IMS specifications.

- Personal data (name, id, sex, etc.): IMS-LIP
- Physical Features (physical disability)
- User type: IMS-LIP
- Languages
- Some tastes, preferences (likes): IMS-LIP
- Education (academic degree): IMS-LIP
- Occupation
- Socio-cultural aspects (behavior, behavior patterns): IMS-LIP
- Economics aspects
- Political aspects (context)
- Most influential sectors (networks of friends): IMS
- Technological skills
- Intellectual capabilities (score intellectual): IMS
- Projects developed: IMS-LIP
- Positions held (employment performed)

Now, we need to customize the FCM-based IRS for this domain.

7.1. Analysis of the user profile of IRS of learning resources

We use the IMS standard, which describes the general information to be collected about a student or a producer of learning content, to customize the user profile [8]. The IMS Learner Information Package (IMS-LIP) specification defines the interoperability of Internet-based learner information systems with other Internet systems used in learning processes. Our IRS must consider this information. The main aspects to consider from the IMS standard are affiliations, competencies, goals, identifications, interests, qualifications, certifications, accessibilities, activities, and relationships.

We verify whether this information is included in the user profile defined previously (see Table 7). In Table 7 we can see that our profile contains the main attributes of the IMS-LIP specification and of the Reusable Definition of Competency or Educational Objective (RDCEO) specification. The RDCEO allows defining competencies in the learning domain. Any additional information to be included in our user profile is specified as part of the domain knowledge of the IRS. In our case, initially we do not add more information. Finally, our FCM will use the same set of concepts (see Table 7).

7.2. Analysis of the item profile of IRS of learning resources

Based on the same idea as the previous section, we need to adapt the item profile to the context of application of the IRS. In this case, we need to compare the items to be recommended against a standard in the domain of Learning Resources (LR). Specifically, we use the LOM-IEEE standard [11]. The LOM-IEEE defines 9 categories: (a) General category (GC): general information about the LR, (b) Lifecycle Category (LC): metadata related to the history and current status of the LR, (c) Metadata Category: information about the metadata itself, (d) Technical category (TC): metadata about the technical requirements of the LR, (e) Educational category: metadata for educational uses of the LR, (f) Rights Category: metadata about property rights and intellectual material, (g) Relation Category: metadata used to establish relationships between LRs, (h) Annotation Category: annotations and comments on the LR, and (i) Classification Category: LR classification, such as taxonomies.

The general category of the LOM-IEEE standard includes nine types of metadata, such as Identifier (ID), Title, and Language, among others. The technical category includes Format, Size, Location, etc. The educational category groups the metadata: Interactivity Type ("Active", "Expositive"), LR Type (exercise, simulation, questionnaire, slide, experiment, lecture, etc.), Interactivity Level (low, high, etc.), Context, Difficulty, Typical Learning Time, among others. For the rest, see [11].

Table 8 Item profile in IRS and in LOM.

General attributes | Specific attributes | LOM
- Item ID (I, D) | | Identifier (GC)
- Name (I, D) | Title | Title (GC)
- Type of item (I, D) | | Catalog entry (GC)
- Description (I, D) | | Description (GC)
- Localization (I, D) | | Location (TC)
- Author of the item (I, D) | Manager/Artist/Composer |
- Date of elaboration (I, D) | Year of manufacture/production |
- Provider (I, D) | |
- Dimension (size) (I, D) | Longitude, latitude, Size |
- Version (I, D) | Model, language | Version (LC)
- Format (I, D) | | Language (GC)
- Components (I, S) | | Structure (CC)
- Relationship between components (I, S) | | Relation category, Rights category
- Constraints (I, S) | Requirements, Installation Remarks |
- Technical Requirements (platform, types problems) (I, S) | e-learning platform |
- Goal (I, F) | |
- Requirements cover (topic, career, area) (I, F) | Course Area, Certification, subject | Educational category

Now, we verify whether this information is included in our item profile; the rest is included as part of the domain knowledge model (see Table 8).

We can see that the IRS covers the main attributes of the LOM standard, and any additional attribute of this standard to be included becomes part of the domain knowledge. In our case, we do not add additional knowledge.

In Sections 7.1 and 7.2 we have shown two examples of implementation of the user and item profiles of our IRS model. In this case study, we have used two standards of the learning domain to define these profiles: the LOM standard for the item profile, and the IMS standard for the user profile. We confirm that the general user and item profiles defined in Section 5 cover the main attributes of these standards. In this way, the FCM implementation of the IRS defined in Section 6 is correct.

8. Results and discussion

8.1. Experiments with the IRS based on FCM

In this section, we evaluate the capabilities of our FCM-based IRS in terms of its learning methods and reasoning mechanisms. In the previous sections, we have defined the knowledge models used by the FCM-based IRS: the knowledge model based on FCM is defined by the concepts that represent the information about the users, items, domain, context and criticisms (see Sections 5-7). Here, we evaluate the learning and reasoning capabilities.

8.1.1. Learning process

This process is based on two steps:

1. Determination of the initial values of the weights: it includes the assignment of initial values (weights) of the relationships between concepts by human experts. The assignment of the relationship weights is obtained from the mean weights established by several experts (see [1,3] for more details of this process).

2. Adjustment of weights based on initial experiments: using the FCM generated in step one, we have taken a sample of real examples in which we knew in advance, for each case, the students and the learning resources to recommend. We have run our FCM-based recommender system, and its results are compared with the real cases to determine whether the weights given by the experts provide the desired output, or whether it is necessary to adjust them in order to obtain the desired output (see the details of this procedure in [1,3]).
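A hedged sketch of step 2 is given below: a simple error-driven adjustment that nudges the expert-given weights until the FCM reproduces the known cases. This is only one of many possible FCM learning schemes and is not necessarily the procedure of [1,3]; all names are hypothetical.

```python
import numpy as np

def adjust_weights(E, cases, run_fcm, lr=0.1, epochs=20):
    """Nudge the expert-given weights so that the FCM reproduces known cases.

    E: initial weight matrix proposed by the experts.
    cases: list of (initial_state, desired) pairs, where desired maps the index
           of an inferred concept to the value observed in the real example.
    run_fcm: a function that iterates the map to convergence (e.g. the fcm_run
             sketch given in Section 6.1).
    """
    for _ in range(epochs):
        for A0, desired in cases:
            A = run_fcm(A0, E)
            for i, target in desired.items():
                error = target - A[i]
                # strengthen or weaken the incoming causal links of concept i
                E[:, i] += lr * error * A0
    return np.clip(E, -1.0, 1.0)     # keep the weights in the usual FCM range
```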

8.1.2. Inference process

Our RS can be used in two cases: to recommend an LR for a given student, and to infer quality aspects of an LR. In this section, we are going to test both capabilities. To recommend an LR, we only need to infer concepts linked to preferences, and to infer the quality of a learning resource, we only need to infer concepts linked to the characteristics of the learning resources (see Table 9).

Additionally, the concepts about student and item profiles to be used by the FCM, for the recommendation or inference of the quality, are reduced (see Tables 10 and 11), considering

Table 9 Inferred concepts in each case.

- Item of interest: to recommend learning resources
- Punctuation: to recommend learning resources
- Item preferred by user: to recommend learning resources
- Preferences: to recommend learning resources
- Use Level: to recommend learning resources
- Usability: to infer the quality of a learning resource
- Interoperability: to infer the quality of a learning resource
- Similar user: to recommend learning resources

only those important for the recommendation, or for the quality of an LR. According to Tables 10 and 11, for the recommendation, the concepts linked to the detailed information of an LR and a student are not relevant to define preferences (they are deleted); and for the inference of the quality, we delete the concepts about the detailed information of the student, and the LR information that is not relevant to determine its usability and interoperability.

If we want to study the preferences (first case), then we need to keep all the student concepts, and some of the concepts about learning resources (such as item ID, type, and goal). The attributes related to the educational characteristics of the item are important in this case (see Table 11). To define the preferences of a student we need all of his/her concepts (see Table 10), but to infer the quality of the learning resources only the student concepts about his/her physical characteristics (sex, disability, etc.) are of interest (see Table 10).

8.1.2.1. To recommend learning resources. Now, we present an example of the FCM used to recommend an LR. In this case, the IRS infers whether a student is interested or not in an LR, whether he/she has a preference for it, whether the LR is useful for the student, etc. For this experiment, an example of the values of the student attributes is given in Table 12, and of the LR in Table 13.

These values of the concepts are defined at the beginning by the users, according to the relationship between the user concepts (an architecture student) and the specific educational resource (Construction). For example, the goal of the item matches the profile of the student (for this reason, the "requirements cover" concept is 0.8), and the learning resource can be used on different platforms (for this reason, the value of the "constraints" concept is 0.8). In this way, we define the initial values of the different concepts of the FCM, and the FCM starts to iterate until convergence. We have used the FCM Designer tool [7]. We have carried out several recommendations for the same student and different learning resources, and these results are shown in Table 14.

The FCM iterates until the values of the inferred concepts stabilize. The first column corresponds to the results for the LR-student pair of Tables 12 and 13. In this case, the FCM iterates 10 times. In that example, we can see that the values obtained for the inferred concepts indicate that the LR is recommended (it has a good punctuation and preference), it should be useful for the student, there are similar students that have used this LR, and the student will have a good level of interaction with the LR. We can carry out a similar interpretation of the concepts for the rest of the results.

Table 10 Concepts about students without impact in each case.

- To recommend learning resources: none (all the student concepts are kept).
- To infer the quality of a learning resource: User type; Some tastes, preferences (likes); Education (academic degree); Occupation; Socio-cultural aspects (behavior, behavior patterns); Economics aspects; Political aspects (context); Most influential sectors (networks of friends); Technological skills; Intellectual capabilities (score intellectual); Projects developed; Positions held (employment performed). Only Personal data, Physical Features and Languages are kept.

Table 11 Concepts about LR without impact in each case.

General attributes | To recommend learning resources | To infer the quality of a learning resource
- Item ID (I, D): X
- Name (I, D): X
- Type of item (I, D)
- Description (I, D): X
- Localization (I, D): X
- Author of the item (I, D): X
- Date of elaboration (I, D): X
- Provider (I, D): X
- Dimension (size) (I, D): X
- Version (I, D)
- Format (I, D): X
- Components (I, S): X
- Relationship between components (I, S): X
- Constraints (I, S): X
- Technical Requirements (platform, types problems) (I, S)
- Goal (I, F): X
- Requirements cover (topic, career, area) (I, F): X

Table 12 Input value of the student concepts.

Attribute Student Concept value

Personal data (name, id, sex, etc.) Vannesa Alarcon, ID 3474342, female, etc. 1

Physical Features (physical disability) Without physical disability 0.5

User type Genius 1

Languages Spanish, French, English 1

Some tastes, preferences (likes) Painting, music, reading 1

Education (academic degree) Student of Architecture 1

Occupation Student 0.5

Socio-cultural aspects (behavior, behavior patterns) Dance, walk, fans of social network 0.7

Economics aspects Middle class 0.3

Political aspects (context) Unknown 0

Most influential sectors (networks of friends) Her parents, her friends of study 0.6

Technological skills Programmer,

Intellectual capabilities (score intellectual) Analytical, thinker 0.5

Projects developed Design of a church 0.6

Positions held (employment performed) Nothing 0

Table 13 Input value of the learning resource concepts.

- Name: Construction (value 1)
- Type of item: Video, book and slides (value 1)
- Description: This resource describes a course about the construction of churches (value 1)
- Constraints: Windows, Linux (value 0.8)
- Technical Requirements (platform, types problems): 3K RAM, VLC tool, players, OpenOffice, graphic processor (value 0.5)
- Goal: Bases about the construction of churches (value 0.6)
- Requirements cover (topic, career, area): Technical drawing, calculation, etc. (value 0.8)

Table 15 Student concepts in the FCM to infer the characteristics of usability and interoperability of a learning resource.

- Personal data (name, id, sex, etc.): Vannesa Alarcon, ID 3474342, female, etc. (value 1)
- Physical Features (physical disability): Without physical disability (value 0.5)
- Languages: Spanish, French, English (value 1)

Table 16 Learning resource concepts in the FCM to infer the characteristics of usability and interoperability of a learning resource.

- Type of item: Video, book and slides (value 1)
- Date of elaboration: Dec 2016 (value 1)
- Provider: Heber Hoeger (value 1)
- Dimension (size): 10 Gb (value 1)
- Version: First version (value 1)
- Format: Pdf, ppt (value 1)
- Components: One part (value 1)
- Relationship between components: Sequential (value 1)
- Constraints: Windows and Linux (value 0.8)
- Technical Requirements (platform, types problems): 3K RAM, VLC tool, players, OpenOffice, graphic processor (value 0.5)

8.1.2.2. To infer the quality of a learning resource. In this case, we test the FCM to infer the characteristics of usability and interoperability. An example of the input of the pair LR and student is given in Tables 15 and 16.

This FCM can predict the characteristics of usability and interoperability of an LR. This is an interesting use of the FCM, in order to determine whether an LR can be interesting for a given course (which can be very important in courses where there are students with physical disabilities). For example, for the case of Tables 15 and 16, the first column of Table 17 describes the inference. In this case, the FCM infers a high usability and interoperability of the LR. Particularly, the FCM infers that the LR is easy to use for students with the profile of Table 15 (usability), and can be integrated into other platforms (interoperability).

8.2. General result analysis

In Section 8.1, we have evaluated the learning and reasoning capabilities of the FCM-based IRS. Particularly, the reasoning capabilities allow different types of inferences, not only to recommend but also to infer other types of information, such as the quality of a learning resource. With the models and results obtained in Sections 6, 7 and 8.1, we have evaluated the entire FCM-based RS as an IRS.

Now, we compare the FCM-based IRS approach with other knowledge-based RS. In order to determine the comparison criteria, we define some questions about how the knowledge is managed by the different approaches in the context of learning resources: How can our IRS be evaluated? How can we measure the quality of the IRS? In general, we are going to use metrics that try to determine how the knowledge in our IRS is exploited. For this reason, we propose the following criteria [9,15]:

Table 17 Results of some recommendations about the quality of the learning resources.

Attribute inferred | Inference 1 | Inference 2 | Inference 3
Usability | 0.89545 | 0.45633 | 0.09435
Interoperability | 0.91322 | 0.34613 | 0.12322

- Validity: determines whether the RS gives explanations that allow users to validate the recommendation. For example, "I recommend this house because you have four children. Because of the number of children, I cannot recommend a small apartment".

- Comprehension: determines whether the explanations/recommendations are based on deep knowledge of the domain of interest.

- Efficiency: determines whether the RS reduces the decision-making effort of the users. In our case, the relevant measure of efficiency is the cognitive effort.

- Persuasiveness: determines whether the IRS can change the behavior of the users, that is, whether its recommendations persuade users to act on them.

- Effectiveness: determines whether the recommendations help users make high-quality decisions.

- Transparency: determines whether the RS provides information that allows the user to understand the reasoning it uses to recommend.

Table 14 Results of some recommendations of learning resources.

Attribute inferred | Recomm. 1 | Recomm. 2 | Recomm. 3 | Recomm. 4
Item of interest | 0.93422 | 0.93242 | 0.01451 | 0.64154
Punctuation | 0.80453 | 0.89453 | 0.01385 | 0.45484
Preferences | 0.88349 | 0.95136 | 0.10340 | 0.51858
Use Level | 0.95364 | 0.86465 | 0.04256 | 0.39552
Similar user | 0.83544 | 0.82345 | 0.12034 | 0.54321

Table 18 Comparison with other approaches.

Criteria | [10] | [14] | [16] | Our IRS
Validity | No | No | No | Yes
Comprehension | Yes | Yes | Yes | Yes
Efficiency | No | Yes | Yes | No
Persuasiveness | Maybe | Maybe | Maybe | Yes
Effectiveness | Sometimes | Sometimes | Sometimes | Yes
Transparency | Yes | No | No | Yes


In this section, we compare our IRS with other works, using these criteria to determine how the knowledge is exploited in each RS (see Table 18).

In general, the IRS gives good results. It provides good explanations of its recommendations and advice, based on its inference process (see Section 7.1) and its knowledge model of the domain of interest (see Sections 5-7). This information can persuade users in a transparent way and help them in their decision-making processes. The only criterion where our IRS does not perform well is efficiency, because the user needs to customize the input and to interpret the inference process of the IRS. The FCM-based IRS increases the decision-making effort of the users; this is due to the technique used to test our IRS (FCM), and not to the IRS framework, because the user needs to know FCM theory to understand the process followed by the RS.

The works [14,16] do not give explanations about their recommendations because their recommendations are not based on a knowledge model. The work presented in [10] has the same efficiency problem as our work because it is based on the same technique, but it does not have a learning process and the users must manually update the cognitive map (transparency). Additionally, the knowledge model of [10] is very simple: it does not consider aspects such as the context and the criticisms, and its user and item profiles are very basic. These limitations in the knowledge model, or the absence of a knowledge model in [14,16], reduce their persuasive capabilities, and for the same reasons, in some cases the recommendations do not help users make high-quality decisions. We would like to highlight that [10] uses the same technique as our example of an IRS, but its knowledge representation is very simple and it has no learning mechanism. Our IRS framework introduces aspects such as the knowledge representation and the learning and reasoning mechanisms, all of which improve the recommendation process.

In general, the IRS framework is the only one that covers all the criteria well; the other approaches have problems because their inference processes are ambiguous (transparency), which in turn weakens their ability to persuade users. Additionally, our IRS can be used in different contexts.

The evaluation of Section 8.1 shows the advantages of the different capabilities of an IRS (learning, reasoning, etc.), and the comparison in this section shows the advantages of how the knowledge is managed in our IRS.

Together, these comparisons and evaluations give a complete picture of the value of an approach such as the IRS.

9. Conclusions

In this paper, we have presented a new type of RS, called the IRS. We have argued that an IRS framework improves the quality of the recommendations due to its knowledge representation and its learning and reasoning mechanisms. Classical RS do not have all these characteristics simultaneously; for example, [10] uses the same technique that we have used to implement our example of the IRS, but in [10] it is defined as a knowledge-based RS. In Section 8.2, we have determined that our system behaves better because these characteristics are used simultaneously.

Our IRS allows navigating over the knowledge in order to exploit it. Additionally, our IRS can infer interesting information that is not traditionally provided by classic RS, about user preferences, critiques, etc. Our IRS can use the knowledge in different ways (to explain, to persuade, to predict, etc.), for different purposes (to infer or to recommend), in a transparent way. Our IRS avoids some of the classical drawbacks of RS: it does not have a ramp-up problem, since its recommendations do not depend on a base of user ratings; it does not require a knowledge engineering process; and it does not have to gather information about a particular user, because its similarity judgments are independent of individual tastes. It is immune to statistical anomalies because its recommendations are based on knowledge, which is updated by learning mechanisms. The IRS framework has a well-defined knowledge model, which considers knowledge about the users, items, domain, context and criticisms. The learning mechanism allows updating this knowledge, which is initially defined using the context and domain information and is used in the first inferences. Then, the IRS exploits the knowledge extracted by the learning mechanism, particularly about the users, to improve its performance.

The implementation of the IRS using FCM shows the versatility of the framework. The main capabilities that the intelligent techniques used to implement an IRS must guarantee are reasoning, representation of diverse knowledge, and learning. In particular, the learning capability of the FCM is easily used by the RS, and the reasoning is defined by the iterative process implicit in the FCM.
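As an illustration of this learning capability, the sketch below applies a Hebbian-style update, one of the rules commonly used to adapt FCM weights from observed concept activations; it is only an assumed example and not necessarily the exact rule implemented in our IRS.

```python
# Illustrative Hebbian-style weight update for an FCM, sketching how the
# learning capability mentioned above can adjust causal weights from observed
# concept activations. This is one common FCM learning rule, shown as an
# assumption for illustration; the exact mechanism of our IRS may differ.
import numpy as np

def hebbian_update(W, activations, eta=0.05, decay=0.02):
    # Strengthen w_ij when concepts i and j are active together,
    # decay unused links, and keep weights in [-1, 1].
    A = np.asarray(activations, dtype=float)
    W = (1.0 - decay) * W + eta * np.outer(A, A)
    np.fill_diagonal(W, 0.0)   # a concept does not influence itself
    return np.clip(W, -1.0, 1.0)

# Hypothetical example: update the map after observing one interaction
# (e.g., a student actually using a recommended learning resource).
W = np.zeros((4, 4))
observed = [0.9, 0.8, 0.0, 0.7]   # activation levels of four concepts
W = hebbian_update(W, observed)
print(np.round(W, 3))
```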

The criteria about the management of knowledge defined in Section 8.2 are well achieved by our approach. Our approach obtains two types of results: qualitative (cognitive map) and quantitative (inferences). With these results, it meets the criteria. The works [14,16] do not reach the validity and efficiency criteria because they do not explain how the results are obtained, and the students/teachers must reason about why to use the recommended learning resources (so validity is not reached). Also, [10,14,16] are not persuasive and effective, due to a knowledge model that only covers partial information about the students or the learning resources, or due to the absence of a knowledge model. This does not allow convincing a person, because it is not possible to define arguments based on knowledge.

Future work should test the IRS on other types of problems with other intelligent techniques, such as ontologies and fuzzy logic, in cases that require extensive knowledge about the domain and context. Also, more experiments are necessary to determine other uses of the knowledge stored by the IRS in the context of RS (e.g., for diagnosis).

Acknowledgment

Dr. Aguilar has been partially supported by the Prometeo Project of the Ministry of Higher Education, Science, Technology and Innovation of the Republic of Ecuador.

References

[1] J. Aguilar, Different dynamic causal relationship approaches for cognitive maps, Appl. Soft Comput. 13 (1) (2013) 271-282, Elsevier.

[2] J. Aguilar, A Survey about fuzzy cognitive maps papers, Int. J. Comput. Cognit. 3 (2) (2005) 27-33, Yang's Scientific Research Institute.

[3] J. Aguilar, Dynamic random fuzzy cognitive maps, Revista Computación y Sistemas, Revista Iberoamericana de Computación 7 (2004) 260-271.

[4] M. Ayub, A. Cian, M. Caliusco, E. Reynares, Developing an ontology-based team recommender system using EDON method: an experience report, SADIO: Electron. J. of Inform. Operat. Res. 13 (2014) 1-13.

[5] R. Burke, Integrating Knowledge-based and Collaborative-filtering Recommender Systems, AAAI Technical Report WS-99-01, pp. 69-72.

[6] R. Burke, Knowledge-based recommender systems, in: A. Kent (Ed.), Encyclopedia of Library and Information Systems, 69 (32), Marcel Dekker Publisher, 2000.

[7] J. Contreras, J. Aguilar, The FCM designer tool, in: M. Glykas (Ed.), Fuzzy Cognitive Maps: Advances in Theory, Methodologies, Tools and Application, Springer, 2010, pp. 71-88.

[8] IMS Global Learning, <http://www.imsglobal.org/cc/index.htm>.

[9] D. Jannach, M. Zanker, A. Felfernig, G. Friedrich, Recommender Systems: An Introduction, Cambridge University Press, New York, 2011.

[10] W. Liu, L. Gao, Recommendation system based on fuzzy cognitive map, J. Multimedia 9 (7) (2014) 970-976.

[11] LOM-IEEE standard. <http://ieeeltsc.org/wg12LOM/>.

[12] B. Ojokoh, M. Omisore, O. Samuel, T. Ogunniyi, A fuzzy logic based personalized recommender system, Int. J. Comput. Sci. Inform. Technol. Secur. (IJCSITS) 2 (5) (2012) 1008-1015.

[13] K. Palanivel, R. Sivakumar, Fuzzy multicriteria decision-making approach for Collaborative recommender systems, Int. J. Comput. Theory Eng. 2 (1) (2010) 57-63.

[14] A. Rodriguez, J. Gago, L. Rifon, R. Rodriguez, A recommender system for non-traditional educational resources: a semantic approach, J. Univ. Comput. Sci. 21 (2015) 306-325.

[15] F. Ricci, L. Rokach, B. Shapira, P. Kantor (Eds.), Recommender Systems Handbook: A Complete Guide for Research Scientists and Practitioners, Springer, New York, 2011.

[16] L. Rifon, A. Canas, V. Roris, J. Gago, M. Iglesias, A recommender system for educational resources in specific learning contexts, in: 8th International Conference on Computer Science & Education (ICCSE), 2013, pp. 371-376.

[17] E. Stuart, N. Shadbolt, D. De Roure, Ontological user profiling in recommender systems, ACM Transact. Inform. Syst. 22 (1) (2004) 54-88.

[18] R. Sikka, A. Dhankhar, C. Rana, A survey paper on e-learning recommender system, Int. J. Comput. Appl. 47 (2012) 27-30.

[19] L. Teran, SmartParticipation: A Fuzzy-Based Recommender System for Political Community-Building, Springer, Switzerland, 2014.

[20] C. Tsai, H. Chuang, The role of cognitive decision effort in electronic commerce recommendation system, Int. Schol. Sci. Res. Innovat. 5 (10) (2011) 36-40.

[21] A. Almahairi, K. Kastner, K. Cho, A. Courville, Learning distributed representations from reviews for collaborative filtering, in: 9th ACM Conference on Recommender Systems (RecSys '15), 2015, pp. 147-154.

[22] J. Neidhardt, R. Schuster, L. Seyfang, H. Werthner, Eliciting the users' unknown preferences, in: 8th ACM Conference on Recommender systems (RecSys '14), 2014, pp. 309-312.

[23] C. Mediani, M.H. Abel, Semantic recommendation of pedagogical resources within learning ecosystems, in: 2016 International Conference on Industrial Informatics and Computer Systems (CIICS), 2016, pp. 1-5.

[24] M. Mendonca, N. Perozo, J. Aguilar, An approach for multiple combination of ontologies based on the ants colony optimization algorithm, in: Asia-Pacific Conference on Computer Aided System Engineering (APCASE), 2015, pp. 140-145.