Transfer: instance data, features, parameters, relations
2.2. Knowledge Transfer
The literature on knowledge transfer can generally be categorized into four main approaches, based on the type of knowledge they transfer [21]:

  • Instance Transfer: Methods in this approach mainly aim to weight and transform labeled source instances for use in the target domain. Standard supervised machine learning models can then be applied to the transferred samples.
  • Feature Representation Transfer: The core idea of models in this category is to find a common representation of the source and target domains that reduces the distance between the domains while keeping their classes discernible.
  • Parameter Transfer: The basic assumption is that the source and target domains share some parameters or prior distributions of the models' hyperparameters. These methods focus on the transfer of prior knowledge and parameters between domains.
  • Relational Transfer: The knowledge to be transferred is the relationships among the data, and a mapping of relational knowledge between the source and target domains is built. Both domains must be relational.

Authors in [12] proposed an instance-based transfer model in the HAR domain that interprets source-domain data as pseudo training data, weighted by their similarity to the target-domain samples. These pseudo data are then fed into supervised learning algorithms to train the classifier.
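A minimal sketch of this idea, assuming cosine similarity to the nearest target sample as the similarity measure and a logistic-regression classifier (both illustrative choices, not the exact design of [12]):

```python
# Instance-based transfer sketch: source samples are weighted by their
# similarity to unlabeled target samples and used as pseudo training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def fit_instance_transfer(X_src, y_src, X_tgt_unlabeled):
    # Similarity of each source sample to its closest target sample
    # (assumed measure; [12] may use a different one).
    sim = cosine_similarity(X_src, X_tgt_unlabeled).max(axis=1)
    # Rescale similarities to [0, 1] and use them as sample weights, so
    # source samples that resemble the target domain dominate training.
    weights = (sim - sim.min()) / (sim.max() - sim.min() + 1e-12)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_src, y_src, sample_weight=weights)
    return clf
```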
Quite recently, another cross-domain activity recognition translation framework was proposed by researchers in [28]. It first obtains pseudo labels for the target domain using a majority voting technique, and then transforms both domains into common subspaces while considering intra-class correlations. The model works in a semi-supervised manner and obtains the final target-domain labels via a second annotation step.
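A minimal sketch of the majority-voting step, assuming integer-encoded activity labels and an arbitrary set of base classifiers (the actual classifiers used in [28] may differ):

```python
# Majority-vote pseudo labeling: several classifiers trained on the source
# domain each predict target labels; the most frequent prediction per target
# sample becomes its pseudo label.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def pseudo_label_by_majority_vote(X_src, y_src, X_tgt):
    voters = [KNeighborsClassifier(), SVC(), DecisionTreeClassifier()]
    votes = np.stack([v.fit(X_src, y_src).predict(X_tgt) for v in voters])
    # Majority vote across classifiers for each target sample
    # (labels are assumed to be non-negative integers).
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])
```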
Transfer Component Analysis (TCA) is a domain adaptation method introduced in [20]. TCA learns transfer components across domains in a Reproducing Kernel Hilbert Space to establish a feature-representation transfer. With the new representation in the subspace spanned by these transfer components, standard machine learning methods can be applied to train classifiers or regression models on the source domain for use in the target domain.
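A minimal sketch of the unsupervised TCA formulation, assuming an RBF kernel and illustrative values for the kernel width and the trade-off parameter mu:

```python
# TCA sketch: find transfer components W that reduce the MMD between source
# and target in an RKHS while preserving data variance, then embed both
# domains into the shared subspace spanned by those components.
import numpy as np
from scipy.linalg import eigh
from sklearn.metrics.pairwise import rbf_kernel

def tca_fit_transform(X_src, X_tgt, n_components=10, mu=1.0, gamma=1.0):
    ns, nt = len(X_src), len(X_tgt)
    n = ns + nt
    X = np.vstack([X_src, X_tgt])
    K = rbf_kernel(X, X, gamma=gamma)          # kernel over both domains
    # MMD coefficient matrix L and centering matrix H.
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n
    # Generalized eigenproblem (K H K) w = lambda (K L K + mu I) w;
    # transfer components are the eigenvectors with the largest eigenvalues.
    A = K @ H @ K
    B = K @ L @ K + mu * np.eye(n)
    _, vecs = eigh(A, B)
    W = vecs[:, -n_components:]
    Z = K @ W                                  # embedded source + target data
    return Z[:ns], Z[ns:]
```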

Transfer models: mapping data to a domain-independent representation; adversarial learning of domain-invariant robust features
The commonplace approach for tackling such heterogeneity, using instance-specific labeled data to build individual- and device-specific classifiers, is clearly infeasible for practical societal-scale deployment. Instead, significant research has focused on techniques for automated domain adaptation, i.e., a transfer learning-based mechanism that allows a model trained on one domain to flexibly evolve and cater to data collected under a different domain/context, while requiring modest-to-no labeled training data from the target domain. A variety of approaches for such HAR-oriented domain adaptation have been suggested in recent years, including techniques that

  • (a) employ transfer learning to modify a source-domain model with only modest amounts of target-domain labeled data (Khan, Roy, & Misra, 2018; Qin, Chen, Wang, & Yu, 2019);
  • (b) map domain-dependent sensor values to a domain-independent, common low-dimensional latent space (Jeyakumar, Lai, Suda, & Srivastava, 2019); and
  • (c) use adversarial learning techniques to learn a set of robust features that are invariant to data from either the training (source) or test (target) domain (Ganin & Lempitsky, 2014); a sketch of this adversarial approach is given below.

In general, these techniques suffer from at least one of three key limitations: (a) they often require at least modest amounts of labeled target-domain data, with their performance degrading sharply in the absence of any labeled data; (b) they require the capture of synchronously paired data, i.e., the simultaneous capture of target- and source-domain data streams as a means of implicit labeling (e.g., Akbari and Jafari (2019) and Jeyakumar et al. (2019)); and (c) they require modification of the gesture classification model, which, while not technically difficult, presents practical difficulties because many ML-based activity models are now bundled as standard executable binaries by OS or App developers.
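A minimal sketch of adversarial domain-invariant feature learning in the spirit of Ganin and Lempitsky (2014), written in PyTorch; the network sizes and the gradient-reversal strength lambda are illustrative assumptions:

```python
# A gradient-reversal layer makes the feature extractor fool a domain
# discriminator while still supporting the activity classifier, pushing the
# learned features to be invariant to the source/target domain shift.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Gradients pass through with reversed sign, turning the domain
        # discriminator's loss into an adversarial signal for the features.
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim, n_classes, hidden=64, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.activity_head = nn.Linear(hidden, n_classes)
        self.domain_head = nn.Linear(hidden, 2)   # source vs. target

    def forward(self, x):
        z = self.features(x)
        class_logits = self.activity_head(z)
        domain_logits = self.domain_head(GradientReversal.apply(z, self.lam))
        return class_logits, domain_logits
```

Training minimizes the activity-classification loss on labeled source data plus the domain-classification loss on both domains; because of the reversal layer, the feature extractor is simultaneously driven to make the two domains indistinguishable.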